Here I lay out the Artificial Learning Intelligence System (ALIS): what it is, how it works, how it can be designed well, and how it can be developed.
Concept
Today's generative AI, and large language models in particular, is trained with "supervised learning" over neural networks.
We regard this neural-network training as "innate learning" (learning the system is born with).
ALIS is a system that aims to achieve deeper understanding by combining this "innate learning" with a separate process of "acquired learning" (learning that happens after birth). The acquired-learning process is distinct from the innate one.
In acquired learning, the learned knowledge is stored outside the neural network and is drawn on when the system tries to understand something.
The central technical questions for ALIS are therefore how to extract useful knowledge, how to store it well, and how to select the right knowledge during reasoning.
Note also that ALIS is not a single small technology; it is a whole system that integrates innate and acquired learning.
Elements of a Learning Intelligence System
ALIS treats the existing "innate learning" and the yet-to-be-built "acquired learning" as operating under the same principles of learning and reasoning.
To explain how ALIS learns, we define five elements of a learning intelligence system:
The first is the Intelligent Processor: the component that reasons with knowledge and extracts knowledge for learning.
Examples of intelligent processors are LLMs (Large Language Models) and certain parts of the human brain.
The second is the Knowledge Store: where extracted knowledge is kept so that it can be retrieved whenever it is needed.
For LLMs, the knowledge store is the parameters of the neural network. For humans, it corresponds to long-term memory in the brain.
The third is the World: the external environment that a learning intelligence system such as a human or ALIS perceives.
For humans, the world is the reality we live in. For LLMs, the loop in which their outputs are received and feedback comes back is what counts as the world.
The fourth is the State Memory: a small internal working memory, like a scratchpad, that a learning intelligence system uses while reasoning.
For LLMs, this is the memory space used during reasoning, known as hidden states. For humans, it corresponds to short-term memory.
The fifth is the Framework, what we would colloquially call a "way of thinking". In learning-intelligence-system terms, it means the rules for selecting the right knowledge during reasoning, and the structure for organizing the state memory.
For LLMs, the framework is the semantic structure of the hidden states, whose contents humans usually cannot interpret. Knowledge selection, likewise, happens inside the attention mechanism, which chooses which existing tokens to attend to for each token being processed.
For humans, as noted above, the framework is the way of thinking. When a person thinks within a particular framework, they recall certain skills from long-term memory into short-term memory, then organize the information currently perceived according to that framework in order to understand the situation.
How a Learning Intelligence System Operates (Principles)
A learning intelligence system operates as follows:
The intelligent processor acts on the world. The world responds with results that depend on that action.
The intelligent processor extracts useful knowledge from those results and saves it in the knowledge store.
As the intelligent processor acts on the world repeatedly, it selects knowledge from the knowledge store and uses it to change how it acts.
That is the basic cycle.
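The cycle above can be sketched in a few lines of Python. Everything here is illustrative: the `KnowledgeStore` class, the word-overlap `select` rule, and the stubbed `act_on_world` and `extract_knowledge` functions are assumptions standing in for real components, not part of any ALIS implementation.

```python
from dataclasses import dataclass, field

@dataclass
class KnowledgeStore:
    items: list = field(default_factory=list)

    def save(self, knowledge: str) -> None:
        self.items.append(knowledge)

    def select(self, query: str) -> list:
        # Naive selection rule: keep knowledge sharing a word with the query.
        words = set(query.lower().split())
        return [k for k in self.items if words & set(k.lower().split())]

def act_on_world(action: str) -> str:
    # Stand-in for the world: it just reports an outcome for the action.
    return f"outcome of {action}"

def extract_knowledge(action: str, outcome: str) -> str:
    # Stand-in for the intelligent processor's extraction step.
    return f"{action} leads to {outcome}"

# One turn of the cycle: act, observe, extract, save.
store = KnowledgeStore()
outcome = act_on_world("deploy fix")
store.save(extract_knowledge("deploy fix", outcome))

# On a later, similar action, stored knowledge is selected and can
# modulate how the processor acts.
relevant = store.select("deploy fix again")
```

The point of the sketch is only the shape of the loop: act, extract, save, then select on later actions.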
The crucial point, however, is that how knowledge is extracted, saved, selected, and used determines whether the system can learn anything meaningful.
Humans have effective mechanisms for extracting, saving, selecting, and using knowledge, and that is what makes them able to learn.
Neural networks, including LLMs, have mechanisms for saving, selecting, and using knowledge, although the extraction step is handled by an external teacher. That is why they can learn as long as a teacher supplies what they need.
Beyond that, a learning intelligence system can learn even more complex things by also learning, as knowledge, how to extract, save, and select frameworks, and how to use them in state memory.
Different Kinds of Knowledge
Following this principle, when we design acquired learning, we must be clear about what kind of information the acquired knowledge will be.
It would be possible to learn acquired knowledge separately, as parameters of another neural network.
However, acquired knowledge does not have to be neural-network parameters. One practical option is knowledge written down as natural-language text.
If knowledge is written as natural-language text, LLMs (Large Language Models) can use their natural-language-processing abilities to extract and apply it. It can also be treated as data by ordinary IT systems, which makes it easy to save and select.
Moreover, knowledge written in natural language is easy for humans and other LLMs to inspect, understand, and sometimes even edit.
It can also be shared with other learning intelligence systems, and merged or split.
For all these reasons, acquired knowledge in the ALIS concept is initially designed to target knowledge written down as natural-language text.
Acquired State Memory and Framework
I have explained why knowledge written in natural language is a good choice for acquired knowledge.
In the same way, natural-language text can also serve as the state memory and the framework during reasoning.
The framework, that is, the way of thinking, can likewise be stored in the knowledge store and used as knowledge written in natural language.
When states are initialized or updated according to how that framework organizes things, a text-based state memory can be used.
If we design ALIS to use text not just for acquired knowledge but also for frameworks and state memory, ALIS can leverage the natural-language-processing power of LLMs for both acquired learning and general reasoning.
Formal Knowledge
Acquired knowledge, frameworks, and state memory need not be limited to natural-language text; they can also be stricter formal languages or formal models.
Although I wrote "select", the real aim for ALIS is to support many acquired-learning mechanisms at once, so that innate and acquired learning can be combined to best effect.
Knowledge represented in formal languages or formal models can be stricter and free of ambiguity.
Furthermore, if a framework is expressed in a formal language or formal model, and an initial state is expanded in state memory, then an intelligent processor other than an LLM can process the formal model to perform strict simulations and logical reasoning.
A prime example of such formal languages and formal models is programming languages.
As the system learns about the world, if it can express the hidden rules and concepts it finds as programs within a framework, a computer can simulate them.
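As a hypothetical illustration of this point, a rule the system has learned about the world (here, simple compound growth, an invented example) can be written as a program, so that a computer, rather than an LLM, simulates it exactly and unambiguously:

```python
# Hypothetical piece of formal knowledge: a learned rule expressed as a
# program instead of natural-language text. The rule itself is invented
# purely for illustration.

def compound(balance: float, rate: float, years: int) -> float:
    """Learned rule: the balance grows by `rate` once per year."""
    for _ in range(years):
        balance *= 1 + rate
    return balance

# Strict simulation: the same inputs always give the same, unambiguous result.
result = compound(100.0, 0.05, 2)
```

Natural-language phrasings of the same rule ("money grows over time") are vaguer; the program version can be checked, run, and composed mechanically.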
Column 1: Different Kinds of Knowledge
When we organize the knowledge inside a learning intelligence system, it divides naturally into three systems and two types.
The three systems are: network-parameter knowledge handled by neural networks, natural knowledge in natural language, and formal knowledge in formal languages.
The two types are: stateless and stateful.
Stateless network-parameter knowledge is knowledge one simply has, as in deep-learning AI. For example, what cats and dogs look like, which cannot be put into words, can be learned as stateless network-parameter knowledge.
Stateful network-parameter knowledge is tacit knowledge that emerges from iterative processes, as in generative AI.
Stateless natural knowledge is knowledge such as the meaning of a word.
Stateful natural knowledge is knowledge that includes the full context within a sentence.
Some natural knowledge is already embedded innately in stateful network-parameter knowledge, but knowledge can also be acquired after birth from natural-language text.
Stateless formal knowledge is knowledge expressible as non-iterative mathematical formulas. Stateful formal knowledge is knowledge expressible as programs.
A person's short-term memory can also serve as state memory for natural and formal knowledge.
However, being short-term memory, it struggles to hold a stable state, and it is poor at holding knowledge in a precise, unambiguous state.
Paper, computers, and smartphones, on the other hand, can serve as state memory for writing down and editing natural-language text, formal languages, and formal models.
Data on paper or computers is usually seen as a knowledge store, a place to keep knowledge, but it can also serve as state memory for organizing one's thoughts.
It is clear, then, that humans do intellectual work by skillfully using these three systems and two types of knowledge.
ALIS, too, can dramatically improve its abilities by enabling and optimizing intellectual work that uses the same three systems and two types of knowledge.
Specifically, ALIS has the advantage of being able to use many knowledge stores and state memories; moreover, it can easily prepare multiple instances of each and carry out intellectual tasks by switching between or combining them.
Column 2: Intellectual Orchestration
Although it is good that ALIS can save a great deal of knowledge in the knowledge store, having more knowledge does not by itself make it better at intellectual work. Generative AI has a limit on the number of tokens it can use at once, and irrelevant knowledge simply becomes noise.
However, if the knowledge store is well organized, and many specialized knowledge stores are created that hold only the knowledge needed for a particular kind of intellectual task, the problems of token limits and noise can be reduced.
The catch is that those specialized knowledge stores only work for those specific intellectual tasks.
Most intellectual activities are mixtures of different intellectual tasks. So by dividing knowledge into specialized stores by task type, and breaking an intellectual activity down into smaller tasks, ALIS can carry out the whole activity by switching between specialized knowledge stores as needed.
This is like an orchestra: professional musicians each play their own instruments, and a conductor leads the whole.
Through this system technique, "intellectual orchestration", ALIS can organize its intellectual activities.
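A minimal sketch of this orchestration idea, assuming invented store names, invented knowledge items, and a trivially simple routing rule (a real conductor would classify sub-tasks with an LLM):

```python
# Specialized knowledge stores keyed by the kind of intellectual task they
# serve. Both the keys and the knowledge items are invented for illustration.
SPECIALIZED_STORES = {
    "design": ["prefer small interfaces", "document invariants"],
    "testing": ["test edge cases first", "keep tests deterministic"],
}

def conduct(tasks):
    """Run a whole activity by switching stores per sub-task, like a
    conductor cueing different sections of an orchestra."""
    results = []
    for task_kind, description in tasks:
        knowledge = SPECIALIZED_STORES.get(task_kind, [])
        # A real system would pass `knowledge` to an LLM here; this sketch
        # just records which store was consulted for which sub-task.
        results.append((description, knowledge))
    return results

plan = conduct([("design", "sketch the module"), ("testing", "write unit tests")])
```

Each sub-task sees only its own small, relevant store, which is exactly what keeps token usage and noise down.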
ALIS Basic Design and Development Approach
From here, I lay out how ALIS can be developed.
As the principles and columns above suggest, ALIS is naturally designed so that its functions and resources are easy to extend. This is because the essence of ALIS is not any specific function, but the way it extracts, saves, selects, and uses knowledge.
For example, many different knowledge-extraction methods can be prepared, and then chosen among or run in parallel, depending on the system design.
Moreover, ALIS can even make this selection by itself.
Saving, selecting, and using can likewise be freely chosen or run in parallel.
Because of this, ALIS can be developed incrementally and quickly, with no need to design the whole thing end-to-end in a waterfall style.
How ALIS Starts
Now let us design a very simple ALIS.
The user-facing UI is the familiar chat AI. At first, whatever the user types goes straight to the LLM; the LLM's answer is shown on screen, and the system waits for the next input.
When the next input arrives, the LLM receives not just the new input but the entire prior conversation between the user and the LLM.
Behind this chat UI, we add a mechanism that extracts useful knowledge from the conversation so far.
This can be added to the chat system as a job that runs when a conversation ends, or at regular intervals. Naturally, an LLM is used to do the extraction.
This LLM is given the ALIS concept, its principles, and instructions for extracting knowledge as system prompts. If the extracted knowledge is not what we want, the system prompts can be refined by trial and error.
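The extraction job might be sketched as follows. The system-prompt wording is an assumption to be refined by trial and error, and `call_llm` is a stub standing in for whatever real LLM client is used:

```python
# Assumed prompt wording; in practice this would be iterated on.
EXTRACTION_SYSTEM_PROMPT = (
    "You are the knowledge extractor of a learning system. "
    "From the conversation below, list each reusable fact on its own line."
)

def call_llm(system_prompt: str, user_content: str) -> str:
    # Stub: a real implementation would call an LLM API here.
    return "the external API spec the model knew was outdated"

def extract_from_history(history: list[str]) -> list[str]:
    """Run extraction over a finished conversation, returning knowledge items."""
    transcript = "\n".join(history)
    raw = call_llm(EXTRACTION_SYSTEM_PROMPT, transcript)
    return [line.strip() for line in raw.splitlines() if line.strip()]

items = extract_from_history(["user: my build fails", "assistant: try the v2 API"])
```

The job runs outside the chat loop (end of conversation, or on a timer), so extraction never blocks the user-facing response.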
The knowledge extracted from the chat history is saved directly into a "knowledge lake". A knowledge lake simply stores knowledge as-is, without structure, before it is organized.
Next, we prepare ways of structuring the knowledge so that it is easy to select from the knowledge lake.
Concretely, this means providing "embedding vector stores" for semantic search, as used in RAG (Retrieval-Augmented Generation), and "keyword indexes", among other things.
Beyond that, a knowledge graph can be generated, or the knowledge can be organized into categories.
This collection of structured views over the knowledge lake is called a "knowledge base". Together, the knowledge base and the knowledge lake make up the "knowledge store".
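One of the simplest structured views, a keyword index over the lake, might look like this; it stands in for the richer embedding vector stores and knowledge graphs mentioned above, and the knowledge items are invented:

```python
from collections import defaultdict

# The knowledge lake: flat, unstructured items, exactly as extracted.
knowledge_lake = [
    "the payment API v1 is deprecated",
    "use exponential backoff for retries",
]

def build_keyword_index(lake):
    """Map each lowercase word to the lake positions containing it."""
    index = defaultdict(set)
    for i, item in enumerate(lake):
        for word in item.lower().split():
            index[word].add(i)
    return index

index = build_keyword_index(knowledge_lake)
hits = sorted(index["api"])  # lake positions mentioning "api"
```

The lake stays untouched; the index is a disposable view that can be rebuilt or replaced (by embeddings, a graph, categories) without losing any knowledge.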
Next, we connect the knowledge store to the chat UI's processing.
This is essentially the same as a standard RAG mechanism: for each user input, relevant knowledge is selected from the knowledge store and passed to the LLM along with the input.
This lets the LLM automatically apply knowledge while processing the user's input.
In this way, knowledge accumulates with every conversation a user has, giving us a simple ALIS that uses knowledge gathered from past conversations.
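The retrieval step can be sketched RAG-style as below. The word-overlap selection rule and the prompt layout are simplifying assumptions (a real system would use the embedding search described earlier), and the knowledge items are invented:

```python
knowledge_store = [
    "the payment API v1 is deprecated; use v2",
    "database migrations must run before deploys",
]

def select_knowledge(user_input: str, store: list[str]) -> list[str]:
    # Assumption: word overlap as a stand-in for semantic search.
    words = set(user_input.lower().split())
    return [k for k in store if words & set(k.lower().split())]

def build_prompt(user_input: str, store: list[str]) -> str:
    """Prepend selected knowledge to the user input before calling the LLM."""
    selected = select_knowledge(user_input, store)
    context = "\n".join(f"- {k}" for k in selected)
    return f"Relevant knowledge:\n{context}\n\nUser: {user_input}"

prompt = build_prompt("call the payment API", knowledge_store)
```

Only the matching item reaches the prompt; the unrelated migration note stays out, which is the whole point of selection under a token budget.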
Simple Example
For example, imagine someone building a web application with this simple ALIS.
The user reports that the code the LLM produced raises an error. After the user and the LLM debug it together, they find that the external API specification the LLM knows is outdated, and the program works once it is adjusted to the latest API specification.
From this conversation, ALIS can now add knowledge to its knowledge store: specifically, that the API specification the LLM knows is outdated, and what the latest specification is.
Then, the next time a program using the same API is to be created, ALIS can use this knowledge to generate it against the latest API specification from the start.
Improving the First ALIS
For this to happen, though, that knowledge must be selected when the user types something. The knowledge may not relate directly to the user's input, because the API name that caused the problem may not appear in it.
In such cases, the API name will only appear when the LLM responds.
So we extend the simple ALIS a little by adding pre-analysis and post-checking mechanisms.
Pre-analysis is similar to the "thinking mode" of recent LLMs. A memory that can hold text is prepared as state memory, and the system instructs the LLM to run a pre-analysis as soon as it receives the user's input.
The result of the LLM's pre-analysis is stored in the state memory. Based on this result, knowledge is selected from the knowledge store.
Then the chat history, the pre-analysis result, the knowledge related to the user's input, and the knowledge related to the pre-analysis result are all passed to the LLM to produce a response.
In addition, the LLM's response is itself used to search the knowledge store. Including the knowledge found there, the LLM is asked to perform a post-check.
If any problem is found, the problematic passages and the reasons for the warning are included and passed back to the chat LLM.
By creating opportunities to select knowledge during pre-analysis and post-checking, we raise the chances that accumulated knowledge is actually used.
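The extended pipeline might be sketched as below, with every LLM call replaced by a stub and word-overlap retrieval standing in for real search; only the control flow (pre-analysis into state memory, two selection passes, post-check) reflects the design above:

```python
def pre_analyze(user_input: str) -> str:
    # Stub for the LLM's "thinking mode" pass; output goes to state memory.
    return f"the request likely involves: {user_input}"

def select_knowledge(query: str, store: list[str]) -> list[str]:
    # Assumption: word overlap as a stand-in for real retrieval.
    words = set(query.lower().split())
    return [k for k in store if words & set(k.lower().split())]

def respond(user_input: str, analysis: str, knowledge: list[str]) -> str:
    # Stub for the main chat LLM call.
    return f"response to '{user_input}' using {len(knowledge)} knowledge items"

def post_check(response: str, store: list[str]) -> list[str]:
    # Stub for the checking LLM call; an empty list means the response passed.
    return []

def handle(user_input: str, store: list[str]) -> str:
    state_memory = pre_analyze(user_input)            # text-based state memory
    knowledge = select_knowledge(user_input, store)   # pass 1: on the input
    for k in select_knowledge(state_memory, store):   # pass 2: on the analysis
        if k not in knowledge:
            knowledge.append(k)
    draft = respond(user_input, state_memory, knowledge)
    warnings = post_check(draft, store)
    if warnings:  # feed warnings back to the chat LLM for a revised answer
        draft = respond(user_input, state_memory, knowledge + warnings)
    return draft

reply = handle("fix the api error", ["the api spec changed last month"])
```

The second selection pass is what lets knowledge keyed on terms absent from the user's input (such as an API name surfaced only during analysis) still be found.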
What to Expect
Building ALIS from a minimal start and then improving it to cover its weaknesses illustrates exactly how agile development works and how ALIS can improve step by step.
Moreover, as the example showed, this first ALIS is well suited to software development. That is because it is a domain with strong demand where knowledge is easy to capture explicitly.
It is a domain where outcomes are clear-cut, black or white, yet one where trial and error and repeated knowledge-gathering still matter.
And since ALIS development is itself software development, it is convenient that ALIS developers can be ALIS users themselves.
Like the ALIS system itself, the knowledge lake can be shared openly in places like GitHub.
This would let many people collaborate on improving the ALIS system and gathering knowledge, with everyone benefiting from the results, which would further accelerate ALIS development.
Of course, knowledge sharing need not be limited to ALIS developers; knowledge can be gathered from every software developer who uses ALIS.
The fact that the knowledge is in natural language brings two further advantages:
The first advantage is that the knowledge remains usable even when the LLM model is changed or updated.
The second advantage is that the accumulated knowledge lake can serve as training data for LLMs. This can happen in two ways: by using it for fine-tuning, or by using it in LLM pre-training itself.
Either way, if LLMs that have innately learned the knowledge gathered in the knowledge lake become usable, software development becomes even more efficient.
Furthermore, software development contains many processes, such as requirements analysis, design, implementation, testing, operation, and maintenance, and each software domain and platform has its own specialized knowledge. If a mechanism is created to divide the accumulated knowledge along these lines, an ALIS orchestra can be formed here too.
So all the component technologies for ALIS already exist. What matters now is practical trial and error with the methods, such as how to extract knowledge well, how to select the right knowledge, how to partition specialized knowledge, and how to use state memory, so as to find what works. Also, as the system grows more complex, processing time and LLM usage costs increase, so optimization will be needed.
This trial and error, and this optimization, can be carried out adaptively through the way frameworks are developed and improved.
At first, the developers, as users, will likely add frameworks to ALIS by trial and error. But even then, the LLM itself can be made to generate ideas for frameworks.
And by adding frameworks to ALIS that improve or discover frameworks based on the results it gets from the world and the knowledge it extracts, ALIS itself will come to experiment and optimize adaptively.
ALIS in the Real World
Once ALIS has matured to this level, it should be able to learn knowledge not only in the world of software development but in many other domains as well.
As with software development, ALIS will likely expand into the various intellectual activities humans perform with computers.
Even in such purely intellectual activities, ALIS has a kind of "embodied AI" character with respect to the world it deals with.
This is because it knows the boundary between itself and the world, acts on the world through that boundary, and can interpret the information it receives from the world.
What we usually call a "body" is a boundary with the world that is visible and localized in one place.
But even if the boundary is invisible and spatially distributed, perceiving and acting through a boundary is the same as having a physical body.
In that sense, ALIS, when engaged in intellectual activities, can be seen as having the character of a virtually embodied AI.
And once ALIS has been refined to the point where it can learn well even in new, unknown worlds, it may become possible to incorporate ALIS as part of a genuinely embodied AI with a physical body.
In this way, ALIS will eventually be used in the real world and begin to learn from it.