What is AI?
Last year I taught a computer programming elective at my children’s middle school. The entire time I struggled with AI. Replit, a platform for coding, was my tool of choice, and halfway through the school year they decided to go all-in on AI. Go to the homepage and it asks, “what app do you want to make?” Throughout the platform you can now use AI for everything, including small edits. There are so many AI features that they are difficult to turn off, so I spent significant time and energy in class making sure every AI helper was disabled on every student’s account.
Thankfully, it is very easy to distinguish code written by AI from code written by 12-year-olds.
But on the last day of class I gave in. Since it’s undeniable that AI is changing the art of programming, I decided to have the students build apps with AI tools so that they would be familiar with the future of programming.
The results were astounding. The kids who had struggled the most to understand programming concepts could suddenly unleash their creativity and build the apps they’d always wanted. In 30 minutes, one student who had struggled with programming the entire year created a 3D game with levels, characters, and the beginning of a storyline.
And he had no idea how any of it worked.
Humans have told stories of talking inanimate objects for millennia. And for the first time in human history this dream has come true, through complex arrays of silicon circuits, software, and supercomputers. And not only for a select few, but for just about anyone. It’s getting cheaper incredibly quickly and gaining new capabilities all the time.
So what is AI, and how will it affect us?
AI is a Technology
First, AI is a technology. That might sound obvious, but it’s important because every technology solves some problems and creates brand-new ones. Just like the wheel, or eyeglasses, or the internal combustion engine, AI is useful for many human purposes: it makes certain aspects of life easier and allows us to do things we never could before. I am personally seeing this change in professional software development, where full apps are created in days instead of months by people with little programming experience. Proponents of AI in education tout the benefits of generated lesson plans, automated grading advice, and help for students learning difficult concepts. Stories abound of just how good AI is at these tasks.
But new technologies create new problems. The internal combustion engine revolutionized transportation, but it also created the conditions for automobile accidents, pollution, oil spills, traffic, breakdowns, oil changes, and drunk drivers. In the same way, we are seeing new problems arise with AI. AI makes it much easier to cheat, and harder to get caught doing so. Spam is getting much worse as AI farms crank out hyper-personalized messages. It is harder to trust pictures or videos on social media, and easier for hackers to gain access to private data. A proliferation of AI girlfriend and boyfriend apps promises relationships with all the perks and none of the downsides, and stories are emerging of clinical grief when an AI significant other is accidentally deleted.
AI is a Simulation
All of this leads to my second point: AI is a simulation. This is important because many believe otherwise. It is now common for Silicon Valley elites to refer to a person’s brain as a “biological neural network”, and to talk about a future in which there will be silicon-based lifeforms, not just carbon-based ones. A simulation, by definition, is not the actual thing. It is always derived from something else; it is not its own entity.
Simulations do not experience consequences the way the real things they mimic do, which is in fact why many things are simulated in the first place. Car crashes, for example, are simulated on a computer because engineers can crash the virtual car as many times as they like and learn from the results. The same is true of AI. A simulated human may seem to have many of the same thoughts, emotions, and reactions as a real human, but it can never experience the weight of the consequences of its decisions. If a person tells an AI to invest their money and the AI loses it, the person is the one who feels the pain. Worse, if an AI breaks a law, only the people in charge of the AI, or the organization that created it, can be held accountable. This means there is a limit to what AI can oversee, because important and risky objectives will always need a real person at the helm.
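To make the point concrete, here is a toy sketch in Python (entirely my own illustration, not any real crash-test software): a simulated crash can be run and reset thousands of times, and nothing in the real world bears the cost.

```python
import random

def simulated_crash(speed_mph: float) -> float:
    """Toy model: return a made-up 'damage score' for a virtual crash.
    No physical car, driver, or repair bill is involved."""
    return speed_mph * random.uniform(0.8, 1.2)

# Crash the fake car ten thousand times and learn from the results.
damages = [simulated_crash(speed) for speed in range(1, 10_001)]
print(f"Ran {len(damages)} virtual crashes; worst damage score: {max(damages):.1f}")

# The only consequence is a number on a screen. A real crash test
# would consume a real car every single time.
```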
AI Cannot Discover Truth
The other thing about simulations is that there is no way for something simulated to know what is real and what is fake, because a simulation always has something else sensing for it. There is a medium between its senses and the real world. In the case of AI, it is trained only on what its trainers give it; it cannot investigate the real world itself. And even if it could, say if someone invented a robot that could learn like a human, there is always the chance that someone is in the middle modifying the inputs. What this means is that AI cannot discover truth. It has no way to determine whether it is perceiving the real world or whether someone behind the scenes is toying with things. And if an AI cannot discover what is true, then it will remain gullible, subject to the wishes of whoever controls it, for good or for ill.
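Here is a small, hypothetical sketch of that mediation in Python (all the names are mine, invented purely for illustration): the model only ever “sees” whatever the layer in front of it decides to pass along, and it has no second channel to check against.

```python
def sensor_reading() -> str:
    """Stand-in for the real world: what is actually out there."""
    return "the bridge is out ahead"

def curated_feed(reading: str) -> str:
    """The medium in the middle. Whoever controls this layer
    controls everything the model can ever know."""
    return reading.replace("is out", "is fine")  # a quiet edit

def model_answer(observation: str) -> str:
    """The 'AI' can only reason over what it was given."""
    return f"Based on my inputs, {observation}."

print(model_answer(curated_feed(sensor_reading())))
# -> Based on my inputs, the bridge is fine ahead.
# The model has no independent way to walk down and check the road.
```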
AI is a powerful tool that will reshape the world. In the coming years it will become even better, break records, and do things thought impossible. At the same time, there are fundamental limits to what it can do. Our objective as educators is to help our students learn the art of discovering the truth for themselves. That will continue to be a task only humans can do.