We stand at the threshold of accelerating technological progress, particularly the rapid advancement of AI technology.
Generative AI can now not only converse fluently but also write programs. This not only makes human work more efficient and of higher quality but also feeds back into the improvement of generative AI itself.
This is not merely about strengthening the generative AI model's structure or pre-training methods.
As the number of software applications that generative AI can connect to and operate increases, it will be able to do far more than chat. Moreover, if software is built that lets generative AI gather the knowledge a task requires and retrieve it at the right moment, the AI can behave more intelligently, drawing on the right knowledge without that knowledge having to be baked in through pre-training.
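The retrieval pattern described above can be sketched minimally. Everything in this sketch is hypothetical and for illustration only: the function names, the toy knowledge store, and the naive keyword-overlap ranking stand in for whatever real retrieval system and model API would actually be used.

```python
# A minimal, hypothetical sketch of retrieval-augmented prompting:
# fetch task-relevant knowledge at the moment it is needed, rather than
# relying only on what pre-training baked into the model.

def retrieve(query: str, knowledge_base: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    query_words = set(query.lower().split())
    scored = sorted(
        knowledge_base,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(task: str, knowledge_base: list[str]) -> str:
    """Prepend retrieved context so the model can use fresh knowledge."""
    context = retrieve(task, knowledge_base)
    return "Context:\n" + "\n".join(context) + "\n\nTask: " + task

# Toy knowledge store (hypothetical contents).
kb = [
    "Invoice records are stored in the billing database.",
    "The deploy script runs every night at 02:00.",
    "Customer names must be anonymized before export.",
]
prompt = build_prompt("export customer invoice records", kb)
```

The point is only the structure: knowledge lives outside the model and is selected per task, so the system's effective competence grows with its surrounding software rather than with retraining.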
In this way, the progress of AI technology accelerates the entire AI technology field, including applied technologies and applied systems. This acceleration, in turn, recursively leads to further acceleration of AI technology. As AI technology accelerates and AI becomes capable of more things, the places and situations where it is used will naturally increase exponentially.
This will inevitably increase the number of investors and engineers interested in AI technology. Thus, the acceleration of AI technology is also reinforced from a socioeconomic perspective.
On the other hand, such technological progress affects us in various ways, both indirectly and directly.
Generally, technological progress tends to be viewed as a positive thing. While concerns about the risks of new technologies are raised, the positive effects of progress usually outweigh them, and risks can be mitigated over time, so the overall benefits are considered significant.
However, this is only true when the pace of technological progress is moderate. When the acceleration of technological progress exceeds a certain limit, the benefits no longer outweigh the risks.
Firstly, even developers themselves do not fully understand all the characteristics or potential applications of a new technology. Especially regarding applications, it is not uncommon for others to discover surprising uses or combinations with other technologies that developers did not anticipate.
Furthermore, if we broaden our perspective to include these applications and consider what benefits and risks the technology poses to society, virtually no one can fully grasp it.
When progress is gradual, such social blind spots in technology are filled over time, and the technology is eventually deployed in society with its blind spots sufficiently addressed.
However, when technological progress exceeds a certain speed, the grace period for addressing these blind spots shrinks. From the standpoint of filling social blind spots, accelerating technological progress acts as a relative compression of time.
New technological changes arise one after another, occurring simultaneously across numerous technologies, causing the societal cognitive task of addressing social blind spots to fall behind.
Consequently, we find ourselves surrounded by various technologies with lingering social blind spots.
The potential risks possessed by such technologies can suddenly emerge from our blind spots and cause harm to society. Since risks for which we are unprepared or have no countermeasures suddenly appear, the impact of the damage tends to be greater.
This situation alters the magnitude of the benefits and risks of technological progress. Due to the time compression effect, risks materialize before social blind spots can be filled, thereby increasing the risk associated with each technology.
The self-reinforcing acceleration of generative AI's progress could eventually give rise to countless technologies with social blind spots that are almost impossible to fill, drastically tilting the balance between risks and benefits.
This is a situation we have never experienced before. Therefore, no one can accurately estimate the extent of potential risks as social blind spots or how significant their impact might be. The only certainty is the logical structure that the faster the acceleration, the more the risks increase.
Chronoscramble Society
Moreover, we cannot accurately grasp the current pace of technological progress, let alone what it will be in the future.
This holds true even for generative AI researchers and developers. For example, there are significant differences in opinion among experts regarding when AGI, an AI that surpasses human capabilities in all aspects, will emerge.
Furthermore, generative AI researchers and developers are distinct from experts in applied technologies and applied systems. Therefore, while they may be knowledgeable about the latest research status and future prospects of generative AI, they cannot fully comprehend what applied technologies and applied systems using generative AI already exist, or what possibilities might open up in the future.
And when it comes to applied technologies and applied systems, the possibilities are virtually infinite due to combinations with various existing mechanisms. Even among those researching and developing applied technologies and applied systems, it would be difficult to grasp everything, including items from different genres.
It is even more challenging to infer or predict how such applied technologies and applied systems will proliferate in society and what impacts they will have. Researchers and engineers, in particular, are not necessarily well-versed in or highly interested in societal impacts. Conversely, those highly interested in societal impacts often have inherent limitations in their technical knowledge.
Thus, no one can fully grasp the current state or future vision of generative AI. And there are discrepancies in each person's understanding.
The problem is not merely that discrepancies exist, but that the pace of progress is unknown. We are certainly at the threshold of an era where technological progress is undergoing accelerating time compression, but we lack a common understanding of its speed.
What's worse, there are differences in perception among individuals as to whether technological progress is constant or accelerating. Additionally, even among those who agree on acceleration, perceptions differ greatly depending on whether they believe the acceleration is solely driven by advances in generative AI's core technology, or if they also factor in acceleration due to applied technologies and applied systems, as well as the influx of people and capital from a socioeconomic perspective.
In this way, the variations in understanding the current situation and future vision, coupled with the discrepancies in perceiving the pace of progress, are creating astonishingly large differences in our individual perceptions.
What level of technology, and what social impact, does August 2025 represent? And what will 2027 (two years from now) and 2030 (five years from now) bring? The answers vary greatly from person to person. Moreover, this gap in perception is likely larger now, in 2025 (two years after the generative AI boom of 2023), than it was then.
I call a society in which individuals hold vastly different perceptions of the times a Chronoscramble Society, from the Greek chronos, meaning time.
And within the reality of this Chronoscramble Society, we must confront the problems of time compression and technological social blind spots, which we cannot commonly and accurately perceive.
Vision and Strategy
If we are to address the problem of technological social blind spots, knowing that our own sense of time may not match the actual pace of time compression, and to do so in collaboration with others whose perceptions differ from ours, a vision and a strategy are indispensable.
A vision here means indicating immutable values and directions, regardless of the prevailing sense of time.
For instance, to put the discussion simply, "ensuring that the risks of technology do not outweigh its benefits" is one important vision. This is a vision that more people can agree upon than, say, "advancing technology" or "minimizing technological risks."
And it is crucial to enable as many people as possible to cooperate toward achieving that vision. Even with agreement on a vision, it cannot be achieved without action.
Here again, it is necessary to formulate a strategy while understanding that we are in a Chronoscramble Society where there are differences in the sense of time. For example, a strategy of making everyone's sense of time align with actual time compression would likely not succeed. It would impose a significant learning burden on individuals, and the energy required for that alone would lead to exhaustion. Moreover, as this gap widens each year, the necessary energy would only increase.
I cannot present a perfect strategy, but one example is to harness, in pursuit of the vision, something that automatically grows stronger over time.
That is generative AI itself. It is somewhat paradoxical, since it means using the very thing we are trying to rein in, but it is clear that, against the problem of time compression, conventional approaches will only grow more difficult over time. To counter it, we have little choice but to leverage capabilities that are themselves undergoing time compression.
And, if we are lucky, if we can ultimately use the capabilities of generative AI itself to regulate the speed of technology development driven by generative AI, and keep it from accelerating beyond limits, we will be considerably closer to solving the problem.
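The idea of speed regulation can be illustrated abstractly as negative feedback. Every quantity and threshold below is hypothetical: a toy governor that damps a self-reinforcing pace of change whenever it outruns society's capacity to fill blind spots, not a proposal for an actual mechanism.

```python
# A toy negative-feedback "governor": when the measured pace of change
# exceeds what oversight can absorb, the loop throttles further
# acceleration. All quantities here are hypothetical illustrations.

def governed_pace(pace: float, oversight_capacity: float,
                  gain: float = 0.5) -> float:
    """Damp the pace of change in proportion to how far it exceeds
    society's capacity to fill blind spots."""
    excess = max(0.0, pace - oversight_capacity)
    return pace - gain * excess

pace = 1.0
history = []
for year in range(5):
    pace *= 1.8                                   # self-reinforcing acceleration
    pace = governed_pace(pace, oversight_capacity=2.0)
    history.append(round(pace, 2))
# Ungoverned, the pace would grow as 1.8**n without bound;
# governed, it stays bounded while still increasing.
```

The structure, not the numbers, is the point: measure the pace, compare it against capacity, and apply proportional damping, using a capability that itself scales with the acceleration it is meant to control.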
Conclusion
In a Chronoscramble Society, each of us will have multiple, differing blind spots. This is because no one can grasp all cutting-edge information without blind spots and appropriately connect it to estimating the present and predicting the future.
Then some trigger makes us suddenly aware of a blind spot's existence. This happens repeatedly: a blind spot surfaces, its gap is filled, and then another surfaces.
Each time, our perceived position on the time axis, and our outlook on the future, is sharply compressed. It feels as if we have suddenly leaped through time: a perceived time leap toward the future.
In some cases, multiple blind spots may become apparent within a single day. In such instances, one would experience repeated time leaps in a very short period.
In that sense, unless we acknowledge the existence of our own blind spots and possess a robust vision capable of withstanding multi-stage time leaps, it will become difficult to make accurate critical decisions regarding the future.
In other words, while striving to bring our sense of time closer to reality, the necessity of thinking about things based on principles and rules that transcend eras will increasingly grow.
Furthermore, we must also confront the reality that, amidst time compression, we can no longer implement risk countermeasures at the same pace as before.
Moreover, unless we slow down the speed of this time compression itself, it will exceed the limits of our perception and control.
To achieve this, we must seriously consider utilizing the speed and influence of AI itself, which accelerates due to time compression.
This resembles what economics calls built-in stabilizers, such as progressive taxation and social security systems, which automatically curb an overheating economy.
In short, we need to devise mechanisms for AI to function not only as a technological accelerator but also as a social built-in stabilizer.