This article has been translated from Japanese using AI
This article is in the Public Domain (CC0). Feel free to use it freely. CC0 1.0 Universal

Time Compression and Blind Spots: The Need for Regulation

We stand at the precipice of accelerating technological progress, particularly the exponential advancement of AI technology.

Generative AI can not only converse fluently but also write programs. This not only makes human work more efficient and effective but also feeds back into the improvement of generative AI itself.

This is not just about strengthening the model structure or pre-training methods of generative AI.

As generative AI gains access to more software that it can connect to and use, it will be able to do far more than chat. Furthermore, if software is developed that lets generative AI gather the knowledge needed for its tasks and retrieve it at the right moments, it can behave more intelligently by drawing on the right knowledge, even without additional pre-training.

In this way, the advancement of AI technology accelerates the entire field of AI technology, including applied technologies and systems. This acceleration, in turn, recursively leads to further acceleration of AI technology. Moreover, as AI technology accelerates and AI becomes capable of more things, the places and situations where it is used will naturally increase at an accelerating rate.

This can only increase the number of investors and engineers interested in AI technology. In this way, the acceleration of AI technology is also reinforced from a socio-economic perspective.
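The self-reinforcing loop described above can be sketched as a toy model. This is purely my own illustrative assumption, not a model from the article: capability grows at a rate proportional to itself, so each gain feeds back into faster future gains.

```python
# Toy model (illustrative assumption, not from the article):
# AI capability improves tooling, applications, and investment,
# which in turn improve AI capability -- a recursive feedback loop.

def capability_over_time(initial=1.0, feedback=0.5, years=10):
    """Return yearly capability levels under simple recursive feedback."""
    levels = [initial]
    for _ in range(years):
        # this year's growth is proportional to current capability
        levels.append(levels[-1] * (1 + feedback))
    return levels

levels = capability_over_time()
# the hallmark of acceleration: each year's absolute gain exceeds the last
gains = [b - a for a, b in zip(levels, levels[1:])]
assert all(later > earlier for earlier, later in zip(gains, gains[1:]))
```

The `feedback` parameter is a stand-in for all the reinforcing channels the text names (better tooling, more applications, more investors and engineers); the point is only the shape of the curve, not any particular number.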

On the other hand, such technological progress affects us in various ways, both indirectly and directly.

In general, technological progress tends to be viewed as a good thing. While concerns about the risks of new technologies are raised, the positive effects of progress generally outweigh them, and risks can be mitigated over time, so overall, the benefits are considered significant.

However, this is only true when the pace of technological progress is gradual. When the pace of technological progress accelerates and exceeds a certain limit, the benefits no longer outweigh the risks.

Firstly, even the developers themselves do not fully understand the nature or the full range of applications of new technologies. Especially regarding the scope of applications, it is not uncommon for others to discover uses or combinations with other technologies that surprise even the developers.

Furthermore, when broadening the scope to include how such applications will benefit and risk society, almost no one knows the full extent.

When progress is gradual, such societal blind spots are filled over time, and the technology is eventually deployed in society only after most of its blind spots have been eliminated.

However, when technological progress exceeds a certain speed, the grace period for filling societal blind spots shrinks. From the perspective of filling those blind spots, the acceleration of technological progress makes it seem as though time itself has been compressed.

New technological changes occur one after another, and these happen simultaneously across numerous technologies, making it impossible for the social cognitive process of filling societal blind spots to keep up.

As a result, we will be surrounded by various technologies that remain in a state of societal blind spots.
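The accumulation dynamic above can be made concrete with a small calculation. This is my own illustrative sketch, with assumed numbers: each new technology needs a fixed "absorption time" before society fills its blind spots, while the interval between new technologies keeps shrinking.

```python
# Illustrative sketch (assumed parameters, not from the article):
# count how many technologies are still inside their blind-spot
# period when the latest one arrives.

def unfilled_blind_spots(absorption_time=4.0, first_interval=4.0,
                         speedup=0.7, n_techs=10):
    """Number of technologies whose societal blind spots are still
    unfilled at the moment the last technology appears."""
    t, interval, arrivals = 0.0, first_interval, []
    for _ in range(n_techs):
        arrivals.append(t)
        t += interval
        interval *= speedup  # acceleration: each gap is shorter than the last
    now = arrivals[-1]
    # a technology's blind spots stay unfilled until absorption_time passes
    return sum(1 for a in arrivals if now - a < absorption_time)

# with constant intervals (speedup=1.0) society keeps pace;
# with shrinking intervals, unfilled blind spots pile up
assert unfilled_blind_spots() > unfilled_blind_spots(speedup=1.0)
```

The specific values are arbitrary; the comparison is the point: under acceleration, multiple technologies sit in their blind-spot window simultaneously, while at a steady pace society absorbs each one before the backlog grows.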

The potential risks possessed by such technologies can suddenly emerge from our blind spots and cause harm to society. Since risks for which we are unprepared or have not taken countermeasures suddenly appear, the impact of the damage tends to be greater.

This situation shifts the balance of benefits and risks in technological progress. Because time compression causes risks to materialize before societal blind spots are filled, the risk carried by each technology increases.

The self-reinforcing acceleration of generative AI's progress could eventually create countless technologies with almost unfillable societal blind spots, potentially significantly tipping the balance between risk and benefit.

This is a situation we have never experienced. Therefore, no one can accurately estimate the degree of risk that will lurk in societal blind spots, nor how significant its impact will be. The only certainty is the logical structure itself: the faster the acceleration, the greater the risks.

Chronos-Scramble Society

On the other hand, we cannot accurately grasp the current pace of technological progress, nor what it will be in the future.

This is true even for generative AI researchers and developers. For example, there is a wide divergence of opinion among experts regarding when AGI, an AI that surpasses human capabilities in all aspects, will emerge.

Furthermore, the researchers and developers of generative AI are not the same people as the experts in its applied technologies and systems. So while they may be knowledgeable about the latest research and future prospects of generative AI, they cannot grasp everything about which applied technologies and systems already exist or which future possibilities are opening up.

Moreover, when it comes to applied technologies and systems, the possibilities are virtually limitless when combined with various existing mechanisms. Even among people researching and developing applied technologies and systems, it would be difficult to grasp everything, including those in different genres.

It is even more difficult to infer or predict how such applied technologies and systems will spread in society and what impact they will have. In particular, researchers and engineers are not necessarily knowledgeable about or highly interested in societal impact. On the other hand, the technological insights of those who are highly interested in such societal impact inevitably have limitations.

Thus, no one can grasp the entirety of generative AI's current state or its future vision. And there are discrepancies in each person's understanding.

The problem is not merely that there are discrepancies, but that the pace of progress is unknown. We are certainly at the threshold of an era where technological progress is accelerating and time is being compressed, but we do not have a common understanding of how fast that pace is.

To make matters worse, people differ on whether the pace of technological progress is constant or accelerating. Even among those who agree it is accelerating, perceptions diverge depending on whether they attribute the acceleration solely to progress in generative AI's foundational technology, or also to applied technologies and systems, and to the influx of people and capital driven by socio-economic factors.

In this way, the variability in the perception of the current state and future vision, and the discrepancy in the perception of the pace of progress, create surprisingly large differences in our individual understandings.

What is the technological level and societal impact as of August 2025? And what will they be in 2027, two years later, or in 2030, five years later? Answers vary widely from person to person. Moreover, that variation is probably greater now, in 2025, than it was when the generative AI boom arrived in 2023.

I call a society where individual perceptions of the era differ so greatly a "Chronos-Scramble Society." Chronos is the Greek word for time.

And within the reality of this Chronos-Scramble Society, we must confront the problems of time compression and technological societal blind spots, which we cannot commonly and correctly perceive.

Vision and Strategy

In a situation where one's own sense of time may not match the actual degree of time compression, and where the problem of technological societal blind spots must be addressed together with others who perceive things differently, vision and strategy become indispensable.

Here, vision means showing immutable values and directions, regardless of one's sense of time.

For example, to put the discussion simply, "ensuring that the risks of technology do not outweigh its benefits" is one important vision. This is a vision that more people can agree on than visions like "advancing technology" or "minimizing technological risks."

And it is crucial to enable as many people as possible to cooperate towards the realization of that vision. Even if a vision is agreed upon, it cannot be achieved without action.

Here, too, a strategy must be formulated with an understanding that we are in a Chronos-Scramble Society with differing senses of time. For example, a strategy of making everyone's sense of time align with actual time compression would not work. It would impose a huge learning burden on individuals, exhausting them with just the energy required for it. Moreover, as this gap widens year by year, the necessary energy will also increase.

I cannot present a complete set of perfect strategies, but one example is to leverage something that automatically grows stronger over time in order to achieve the vision.

This refers to using generative AI itself. It may seem circular to use the very technology one is trying to regulate, but when dealing with the problem of time compression, it is clear that conventional methods will only become harder to apply over time. To counteract this, there is no choice but to build countermeasures on capabilities that are themselves being compressed in time.

And hopefully, if we can eventually leverage the capabilities of generative AI itself to regulate the technology development it drives and keep it from accelerating beyond its limits, we will be considerably closer to solving the problem.

Conclusion

In a Chronos-Scramble Society, each of us will carry multiple, differing blind spots. This is because no one can grasp all frontline information in every area without blind spots, let alone connect it appropriately to estimates of the present and predictions of the future.

And at some point, an opportunity will suddenly arise to realize that a blind spot existed there. This will happen repeatedly, each time a blind spot forms and the gap is filled.

Each time, our perceived timeline of our current position and of the future will be sharply compressed. It feels as if we have suddenly leaped through time: a cognitive time-leap toward the future.

In some cases, multiple blind spots may be revealed within a single day. In such instances, one experiences multiple time-leaps in a very short period.

In that sense, unless we acknowledge the existence of our own blind spots and possess a robust vision capable of withstanding multi-stage time-leaps, it will become difficult to make accurate critical decisions concerning the future.

In other words, while striving to bring our sense of time closer to reality, the necessity of thinking based on principles and precepts that transcend eras will increasingly grow.

And in the midst of time compression, we must also acknowledge the reality that risk countermeasures cannot be implemented at the same pace as before.

Furthermore, if the speed of this time compression itself is not slowed down, it will exceed the limits of our perception and control.

To achieve this, we must seriously consider utilizing the speed and influence of AI itself, which is accelerating due to time compression.

This is similar to mechanisms such as progressive taxation and social security systems, which curb an overheating economy and are known as "built-in stabilizers."

In other words, we need to think about mechanisms that allow AI to function not only as a technological accelerator but also as a social built-in stabilizer.