This article is in the public domain (CC0). Feel free to use it however you like. CC0 1.0 Universal

Accelerating Time and the Unknown: Why We Need Rules

We stand before sweeping technological advances that are unfolding rapidly, above all in AI technology, which is developing at remarkable speed.

Generative AI is not only highly capable with language; it can even write programs. This not only makes human work faster and better, it also helps improve generative AI itself.

This is not limited to improvements in generative AI model architectures or pre-training methods.

As generative AI gains access to more software it can connect to and operate, it will be able to do far more than chat. Moreover, if software is developed that lets generative AI gather the knowledge it needs for a task and retrieve it at the right moment, it will be able to behave more intelligently, drawing on the right knowledge even without additional pre-training.

In this way, advances in AI technology accelerate the entire AI field, including its applications and surrounding systems. That rapid progress in turn drives further advances in AI technology itself. And as AI technology advances and AI becomes more capable, the places and situations where it is used will naturally multiply quickly.

This alone will swell the number of investors and engineers interested in AI technology. Thus the advance of AI technology is also reinforced by social and economic forces.

On the other hand, such technological advances affect us in various ways, both directly and indirectly.

In general, technological progress is regarded as a good thing. Although there are concerns about the risks of new technologies, the benefits of progress usually outweigh the harms, and risks can be reduced over time, so on balance the benefits prevail.

However, this holds true only when technological progress is gradual. Once the pace of progress exceeds a certain level, the benefits can no longer be assumed to outweigh the risks.

First, even the developers themselves do not fully understand how new technologies work or everything they can be used for. Especially regarding applications, it is not unusual for others to discover uses, or combinations with other technologies, that surprise even the developers.

Moreover, when we broaden our view to how these applications will benefit and endanger society, almost no one grasps the full picture.

When progress is gradual, these societal blind spots around a technology are filled in little by little over time, and the technology is eventually deployed in society only after enough blind spots have been eliminated.

But when technological progress exceeds a certain speed, the time available to fill societal blind spots shrinks as well. From the perspective of filling societal blind spots, rapid technological progress appears as a compression of time.

New technological changes occur one after another, and simultaneously across many technologies, making it impossible for society's learning processes, which fill those blind spots, to keep up.

As a result, we come to be surrounded by technologies that remain in a state of societal blind spots.

The latent risks of such technologies can emerge suddenly from our blind spots and harm society. Because risks we have not anticipated or prepared for appear without warning, the resulting damage tends to be large.

This situation alters the balance between the benefits and risks of technological progress. Because of the time-compression effect, risks surface before societal blind spots have been filled, so the risks of each technology increase.

The self-reinforcing, accelerating advance of generative AI may eventually produce many technologies whose societal blind spots are hard to fill, upsetting the balance between risk and benefit considerably.

This is something we have never experienced before. So no one can know for certain how much risk lies hidden in these societal blind spots, or how large their effects will be. The only certainty is the logic that the faster the acceleration, the greater the risks become.

Chronos-Scramble Society

On the other hand, we cannot know exactly how fast technology is advancing now, or how fast it will advance in the future.

This is true even for generative AI researchers and developers. For example, experts disagree on when AGI, an AI that surpasses human intelligence in every domain, will appear.

Moreover, generative AI researchers and developers are not the same people as the experts in its applications and surrounding systems. So even someone familiar with the latest research and with what generative AI may become capable of cannot know everything about how generative AI is currently being applied in technologies and systems, or what may yet emerge in the future.

Beyond that, when it comes to applications of technologies and systems, the possibilities are almost endless once they are combined with things that already exist. Even among those who research and develop such applications, it is hard to know everything, especially across different fields.

It is harder still to guess or predict how such technologies and systems will spread through society and what their effects will be. In particular, researchers and engineers are often neither knowledgeable about nor very interested in the social impact. Conversely, the technical understanding of those deeply interested in the social impact is inevitably limited.

So no one can know everything about the current state of generative AI or its future. And different people understand it differently.

The problem is not merely that understandings differ, but that the speed of progress itself is unknown. We are certainly at the beginning of an era in which technology advances rapidly and time accelerates, but we have no shared understanding of how fast that acceleration is.

To make matters worse, people disagree on whether the speed of technological progress is constant or accelerating. And even among those who agree it is accelerating, views differ widely depending on whether they attribute the acceleration solely to progress in generative AI's core technology, or also account for acceleration from applications and systems, as well as acceleration from the inflow of people and money from society and the economy.

Thus, differences in how people perceive the present and the future, and in how they perceive the speed of progress, create very large differences in how each of us understands the situation.

What is the level of the technology, and what is its social impact, as of August 2025? And what will it look like in 2027 (two years later) or 2030 (five years later)? The answers differ from person to person. And that gap in understanding is probably wider now, in 2025, two years after the generative AI boom began in 2023.

I call a society in which individuals' understandings of time diverge in this way a "Chronos-Scramble Society." Chronos is the Greek word for time.

And within the reality of this Chronos-Scramble Society, we must confront the problems of accelerating time and societal blind spots in technology, problems we cannot commonly and correctly comprehend.

Vision and Strategy

In a situation where one's sense of time may not match how fast time is actually accelerating, and where the problem of technological blind spots in society must be solved together with others who think differently, what you see (your vision) and how you plan for it (your strategy) become crucial.

Here, a vision means presenting values and directions that remain unchanged no matter how one perceives the passage of time.

For example, to put it simply, "ensure that the harm technology causes never outweighs the good it does" is one important vision. A vision like this can win broader agreement than visions such as "advance technology as far as possible" or "minimize technology's harms."

And it is vital to enable as many people as possible to work together toward realizing that vision. Even if everyone agrees on a vision, it will not be realized if no one acts.

Here too, strategy must be designed with the understanding that we live in a "Chronos-Scramble Society" where people hold different senses of time. For example, a plan that requires everyone's sense of time to match the actual pace of acceleration will not work. It would impose a heavy learning burden and exhaust people through the sheer energy required. Worse, as the gap widens each year, the energy required would grow as well.

I cannot present a complete set of strategies, but one example is to harness something that automatically grows stronger over time in order to realize the vision.

This means using generative AI itself. Although it is somewhat paradoxical to use the very thing one is trying to manage, it is clear that when dealing with the problem of accelerating time, conventional approaches become harder to sustain as time passes. To prevent this, there is no choice but to devise countermeasures that themselves draw on powers that accelerate with time.

And hopefully, if we can eventually use the power of generative AI itself to govern the technological development that generative AI drives and keep it from accelerating beyond its limits, we will be very close to solving the problem.

Conclusion

Within a Chronos-Scramble Society, each of us will have different blind spots. This is because no one can absorb all the latest information in every field without blind spots and correctly connect it to current estimates and future predictions.

And at some point, the opportunity arises to realize that a blind spot had been there all along. This will happen again and again, whenever a blind spot forms and the gap is later closed.

Each time, our perceived timeline of where we stand now and where the future leads will lurch forward. It will feel as if we have jumped through time: a mental time-jump toward the future.

In some cases, many blind spots may be revealed within a single day. At such times, a person experiences many time-jumps in a very short period.

In that sense, unless we accept that we have our own blind spots and hold a robust vision that can withstand successive time-jumps, it will be difficult to make sound, serious decisions about the future.

In other words, while we try to bring our sense of time closer to reality, the need to think in terms of principles and rules that can endure across any era will keep growing.

And within this accelerating time, we must also accept that our countermeasures against risk can no longer proceed at the same speed as before.

Moreover, unless the acceleration of time itself slows down, it will exceed the limits of our understanding and control.

To achieve this, we must seriously consider harnessing the speed and power of AI itself, which grows as time accelerates.

This resembles mechanisms like progressive taxation or social security systems that automatically cool an overheated economy, what economists call "built-in stabilizers."

In other words, we need to devise ways for AI to function not only as an accelerator of technology, but also as a social built-in stabilizer.