We are living through a period of rapid technological progress, driven above all by the rapid advance of AI technology.
Generative AI can now converse fluently and even write programs. This does not merely make human work faster and better; it also helps generative AI itself improve.
This is not just a matter of strengthening generative AI model architectures or pre-training methods.
As the number of software applications that generative AI can connect to and operate grows, it will be able to do far more than chat. Moreover, if software is built that lets generative AI gather the knowledge it needs for a task and retrieve it at the right moment, it can behave more intelligently, drawing on the right knowledge without additional pre-training.
In this way, the progress of AI technology accelerates the entire AI field, including applied technologies and applied systems. That acceleration, in turn, leads to further acceleration of AI technology itself. As AI advances and its capabilities expand, the places and situations in which it is used will naturally multiply.
This will surely increase the number of investors and engineers interested in AI technology. The acceleration of AI technology is thus also strongly supported by social and economic dynamics.
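The feedback loop just described can be made concrete with a toy simulation. This is only an illustrative sketch: the function `simulate` and all of its parameters (`base_rate`, `feedback`) are invented for illustration, not measurements of real AI progress.

```python
# Toy model of the self-reinforcing acceleration described above.
# Illustrative only: all parameters are assumptions, not measurements.
def simulate(years=10, capability=1.0, base_rate=0.1, feedback=0.05):
    """Capability growth where accumulated capability (via applications,
    talent, and investment) feeds back into the growth rate itself."""
    history = [capability]
    for _ in range(years):
        rate = base_rate + feedback * capability  # feedback term
        capability += rate * capability
        history.append(capability)
    return history

trajectory = simulate()
# Year-over-year growth factors keep rising: faster than exponential.
growth_factors = [b / a for a, b in zip(trajectory, trajectory[1:])]
```

The point of the sketch is only qualitative: because progress itself raises the rate of progress, the growth factor climbs every year rather than staying constant.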
On the other hand, this kind of technological progress affects us in many ways, both directly and indirectly.
In general, people regard technological progress as a good thing. Although concerns are raised about the risks of new technologies, the benefits of progress usually outweigh the harms, and risks can be reduced over time, so the overall balance is seen as strongly positive.
But this holds only while technological progress is gradual. Once the speed of progress exceeds a certain threshold, the benefits can no longer be assumed to outweigh the risks.
First, even developers themselves do not fully understand all the characteristics or capabilities of a new technology. Especially in applications, it is common for others to discover surprising uses, or unexpected combinations with other technologies, that the developers never anticipated.
Moreover, if we widen the view to include all of these applications and ask what good and harm the technology may bring to society, practically no one can grasp the whole picture.
When progress is slow, such social blind spots around a technology are gradually filled in over time, and the technology is eventually deployed in society only after these blind spots have been sufficiently addressed.
But when technological progress exceeds a certain speed, the grace period for addressing social blind spots shrinks as well. From the standpoint of filling social blind spots, the acceleration of technological progress amounts to a relative compression of time.
New technological changes emerge one after another, across many technologies at once, so society's cognitive work of addressing social blind spots falls behind.
Consequently, we find ourselves surrounded by technologies that still carry social blind spots.
The latent risks in such technologies can erupt from our blind spots without warning and harm society. Because risks we have neither anticipated nor prepared remedies for appear suddenly, the resulting damage tends to be larger.
This situation changes the balance between the benefits and risks of technological progress. Because of the time-compression effect, risks materialize before social blind spots can be filled, raising the risk attached to each technology.
The self-reinforcing acceleration of generative AI may eventually produce countless technologies whose social blind spots are almost impossible to fill, seriously shifting the balance between risks and benefits.
This is a situation we have never experienced before, so no one can accurately estimate how many potential risks lie hidden as social blind spots, or how severe their impact might be. The only certainty is the logic itself: the faster the acceleration, the greater the risks.
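The time-compression argument can also be sketched as a toy backlog model: technologies with blind spots arrive at an accelerating rate, while society's capacity to examine them stays fixed. Everything here — the function `backlog`, `review_capacity`, the growth factor — is an illustrative assumption, not data.

```python
# Toy model of the time-compression effect on social blind spots.
# Illustrative only: arrival and review rates are assumed numbers.
def backlog(years=10, review_capacity=5, first_year_arrivals=3, growth=1.4):
    """New technologies arrive at an accelerating rate; society can
    examine only a fixed number per year. Return the yearly backlog
    of technologies whose blind spots remain unaddressed."""
    unexamined = 0.0
    arrivals = float(first_year_arrivals)
    history = []
    for _ in range(years):
        unexamined += arrivals                               # new blind spots
        unexamined = max(0.0, unexamined - review_capacity)  # society catches up
        history.append(unexamined)
        arrivals *= growth                                   # progress accelerates
    return history
```

In the early years the fixed review capacity keeps up and the backlog stays at zero; once arrivals outpace it, the backlog of unaddressed blind spots grows without bound — the qualitative shape of the argument above.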
Chronoscramble Society
Moreover, we cannot really grasp how fast technological progress is moving now, let alone how fast it will move in the future.
This is true even for generative AI researchers and developers. For example, experts disagree widely about when AGI, an AI that surpasses human abilities in every domain, will appear.
Furthermore, generative AI researchers and developers are distinct from experts in applied technologies and applied systems. Even if they know the latest research and the future potential of generative AI itself, they cannot fully grasp all the applied technologies and systems already built on it, or what new possibilities those may open up.
And when it comes to applied technologies and applied systems, the possibilities are almost endless, because they can be combined with all manner of existing mechanisms. Even among those researching and developing such applications, it is hard to keep track of everything, including work in other categories.
It is harder still to anticipate how these applied technologies and systems will spread through society and what their impact will be. Researchers and engineers often have little knowledge of, or interest in, societal effects; conversely, those most interested in societal impacts are often limited in their technical knowledge.
So no one can fully grasp the current state or future trajectory of generative AI, and people's understandings of it differ.
The problem is not merely that differences exist, but that the speed of progress itself is unclear. We can be confident that we are at the start of an era in which technological progress compresses time ever faster, yet we share no common understanding of its speed.
Worse, people disagree about whether technological progress is constant or accelerating. Even among those who agree it is accelerating, understandings diverge sharply depending on whether they believe the acceleration is driven solely by advances in generative AI's core technology, or whether they also count the acceleration from applied technologies and applied systems, and from the people and money flowing in from society and the economy.
In this way, differing understandings of the current situation and future outlook, combined with differing views on the speed of progress, create surprisingly large gaps in our individual perceptions.
What level of technology and social impact does August 2025 represent? And what will 2027 (two years from now) and 2030 (five years from now) bring? The answers vary greatly from person to person. Moreover, this gap in perception is probably larger now, in 2025 (two years after the generative AI boom of 2023), than it was then.
I call a society in which individuals hold sharply different perceptions of the times a Chronoscramble Society. "Chrono" is the Greek root for time.
And within the reality of this Chronoscramble Society, we must confront the problems of time compression and technological social blind spots, which we cannot perceive commonly or accurately.
Vision and Strategy
To address the problem of social blind spots in technology — given that our sense of time may not match the actual compression of time, and that we must work with others whose perceptions differ from our own — we need a vision and a strategy.
A vision here means articulating values and directions that remain constant no matter how people perceive time.
For example, to put this discussion simply, "ensuring that the risks of a technology do not exceed its benefits" is one important vision. It is a vision that more people can agree on than, say, "advancing technology" or "minimizing technological risk."
And it is vital to enable as many people as possible to work together toward that vision. Even a shared vision will not be realized without action.
Here again, we must plan our strategy with the understanding that we live in a Chronoscramble Society, where people perceive time differently. A strategy of bringing everyone's time perception into line with the actual compression of time, for example, would probably fail: it would impose an enormous learning burden on individuals, and the energy required would exhaust them. Moreover, as the gap widens year by year, the energy required would only grow.
I cannot offer a complete set of strategies, but one example is to harness something that automatically grows stronger over time in service of the vision.
That is, to use generative AI itself. This is somewhat paradoxical, since it means using the very thing whose problem we are trying to solve, but it is clear that, against the problem of time compression, conventional approaches will only grow harder over time. To counter it, we have no choice but to seek solutions using capabilities that are themselves undergoing time compression.
And, if we are fortunate, if we can eventually use the capabilities of generative AI itself to regulate the speed of the technological development it drives, keeping it from accelerating beyond our limits, we will be much closer to solving the problem.
Conclusion
In a Chronoscramble Society, each of us carries different blind spots, because no one can absorb all the new information without gaps and connect it well enough to assess the present and predict the future.
Then, at some point, an occasion arrives that suddenly reveals a blind spot. This happens again and again: each time a blind spot surfaces, the gap is filled.
Each time, our sense of time regarding the current situation and the future outlook contracts sharply. It feels as if we have leapt through time — a perceptual time leap toward the future.
Sometimes many blind spots are exposed in a single day. In such cases, a person experiences repeated time leaps within a very short period.
In that sense, unless we accept that we have blind spots of our own and hold a vision robust enough to withstand repeated time leaps, it will be hard to make sound, important decisions about the future.
In other words, even as we try to bring our sense of time closer to reality, the need to reason from principles and rules that endure across eras will only grow.
Moreover, we must face the fact that, amid time compression, we can no longer put risk countermeasures in place at the same pace as before.
What is more, unless we slow the speed of this time compression itself, it will exceed the limits of our ability to perceive and control it.
To achieve this, we must seriously consider using the speed and influence of AI itself, which is accelerating under that same time compression.
This resembles what economics calls built-in stabilizers: mechanisms such as progressive taxation and social security systems that automatically cool an overheating economy.
In short, we need to find ways for AI to function not only as an accelerator of technology but also as a built-in stabilizer for society.
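The stabilizer analogy can be sketched with the same kind of toy growth model as before, now with an automatic damping term. This is only an illustration of the idea, not a real policy model: the function `simulate_with_stabilizer`, the threshold `limit`, and all parameters are assumptions.

```python
# Sketch of AI as a social "built-in stabilizer" (illustrative only;
# the dynamics, threshold, and parameters are all assumptions).
def simulate_with_stabilizer(years=20, feedback=0.05, stabilizer=0.0, limit=0.2):
    """Capability growth with positive feedback. When the growth rate
    exceeds `limit`, the stabilizer pushes back in proportion to the
    excess -- analogous to progressive taxation cooling an economy."""
    capability, rates = 1.0, []
    for _ in range(years):
        rate = 0.1 + feedback * capability           # self-reinforcing growth
        rate -= stabilizer * max(0.0, rate - limit)  # automatic damping
        capability += rate * capability
        rates.append(rate)
    return rates

uncontrolled = simulate_with_stabilizer(stabilizer=0.0)
stabilized = simulate_with_stabilizer(stabilizer=1.0)
```

Without the stabilizer the growth rate runs away; with it, the rate never exceeds the chosen limit, and — crucially for the argument above — the damping strengthens automatically as the system overheats, with no one needing to perceive the acceleration in real time.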