(story from a stupid guy)
Artificial Intelligence (AI) and especially deep learning are cornerstones of our augmented future. Much has been written about AI versus humans. One approach is to compare the hemispheres of our brain to a deep learning engine for the instinctive part and a traditional (procedural) computing engine for the reasoning part. Such a hypothesis was even popularized in Dan Brown’s novel “Origin” back in 2017.
Around the time I was reading this book, my own experience was building a strong belief about AI: our learning process, and in turn how we react in unknown situations, is very close to the behaviour of a deep-learning-powered computer program, and not necessarily the smartest one.
Learning to fly is quite similar to learning to drive. You repeat actions until they become a natural behaviour, a reflex. You repeat the pattern again and again and again. Take-off, 300 feet flaps up, pump off, 500ft turn in climb at climb speed, 1000ft reduce to pattern hold speed, enter downwind, pump on, flaps down, landing lights on, turn in base and reduce speed for 400ft/min negative vario (a smooth descent rate), turn final, full flaps, final approach speed.
What you don’t realise while learning this is what’s happening inside your brain. Each action you take, each glance at your instrument panel, each view of the runway is an input to your cognitive system. You learn sequences of actions which are themselves triggered by a succession of events. But wait, there is more to learn.
Later you might have the chance to fly a high-performance plane with retractable gear and a constant speed propeller. Things get a bit more complex. Take-off, positive climb checked, gear up, flaps up, pump off, reduce throttle, reduce prop speed, turn in climb, at 1000ft reduce throttle to cruise pressure, set prop speed to cruise, turn downwind, pump on, reduce throttle, set propeller speed to go-around, wait for flaps speed, flaps down, set throttle to pattern hold pressure, check speed, gear down, check gear down (all three lights green), landing lights on, open cowl flaps if equipped to limit engine temperature in case of a go-around, turn base, reduce throttle, turn final, check propeller speed, throttle, pump, gear, lights…
OK, that looks pretty obvious. After all, the plane has retractable gear, so you just have to drop it before landing if you aren’t stupid. After all, the plane has a constant speed propeller, so you should set a fine pitch (like first gear on a car) so that you have plenty of power available in case a problem forces you to abort the landing and climb back to pattern altitude. After all, the plane has flaps to reduce the approach speed, so you should set them for a proper landing that won’t need more than the runway length because of excess speed. After all, the plane has landing lights to be seen (among other things), and they should be on so that planes waiting to enter the runway can better see you.
Pretty obvious, isn’t it? This is not rocket science. There aren’t many systems, and they all have a clearly defined function. Who would not understand that the landing gear is for… landing?
In fact, nothing here is obvious. Something is wrong in this reasoning. And guess what? A condition is hiding in it, and it was written a few lines above: you just have to drop the gear if you aren’t stupid.
I’m sorry to have to say that, but the problem is the following: you are stupid. At least I am. Just not in all situations.
Flying, like driving, is not reasoning, just pure learning. Our brain is much too slow to reason in high-workload situations. All preparatory actions occur downwind, when the plane is flying parallel to the runway before turning twice and descending to align with it. They are usually executed while the plane is beside the runway, over a track of about 1000 meters (3000 feet). At 85kts that’s roughly 23 seconds. There are 7 actions to execute and to monitor, so 14 individual actions or checks. That leaves well under 2 seconds per item.
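A quick back-of-the-envelope check of that time budget, sketched in Python using the figures above (1000 m of downwind leg, 85 knots, 7 actions each with a check):

```python
# Time budget on the downwind leg, per the article's figures.
KT_TO_MS = 0.514444            # 1 knot = 0.514444 m/s

downwind_length_m = 1000.0     # track flown beside the runway
speed_kt = 85.0
items = 7 * 2                  # 7 actions + 7 checks

speed_ms = speed_kt * KT_TO_MS
time_available_s = downwind_length_m / speed_ms
budget_per_item_s = time_available_s / items

print(f"time on downwind: {time_available_s:.1f} s")   # ~23 s
print(f"budget per item : {budget_per_item_s:.1f} s")  # well under 2 s
```

Less than two seconds per action or check, which is the point: there is no room for conscious deliberation.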
How long does it take your brain to build and execute an algorithm, to recall an item from memory, to make sure it’s in line with what you have learned? Far more than two seconds, and two seconds is the whole budget. That is beyond the capabilities of the conscious, reasoning part of our brain. The only solution is to use its deep learning part, with the same strengths and weaknesses as any deep learning algorithm.
As a matter of fact, any good flight instructor is able to put any pilot into a situation that will make his learning algorithm fail, resulting in a gear-up landing. There is even a popular saying that there are only two kinds of pilots flying retractable-gear planes: those who have had a gear-up landing, and those who will.
Let’s repeat the downwind section of the pattern with such an instructor:
- Instructor: pull down one notch of flaps
- Pilot (thinking): I should slow the plane down first, why is he telling me to drop the flaps? I’m too fast, let’s check. Yes, I’m too fast, let’s slow the plane down
- Instructor (yelling): Pump on before reduction
- Pilot (thinking): ok, easy, pump on, and let’s anticipate and execute the other actions
- Instructor: Plane at 2 o’clock, converging!
- Pilot (looking around, thinking): no plane…
- Instructor: that was wrong information from the tower
- Pilot : …
- Instructor: low cloud in front of you, larger plane wanting to line up on the runway, shorten the pattern, hurry up! Don’t forget the landing lights, they must see us!
- Pilot: …
- Instructor: Turn base, turn base, reduce throttle, turn at 500ft, watch your vario, check fuel pressure, the needle is wrong
- Pilot (thinking): ok landing light is on, they will see us
- Instructor: Turn final, watch the plane, watch fuel pressure, report your position, watch the ultralight heading 370, he might overshoot his final turn
- Pilot: … ok speed controlled, engine fine, pressure fine
- Instructor: report your position!
- Pilot: Killfields tower from F-DEEP, final 23
- Instructor (with a smile): watch your flare, you will need a perfect one
- Pilot: ???
- Instructor: Killfields tower from F-DEEP, go-around 23 following a gear-up landing exercise
- Pilot: ???
- Instructor: you just crashed the plane, didn’t you hear the gear-up alert signal?
Here is what just happened: the instructor generated events that forced the pilot to interrupt his sequence of actions, and he overloaded the pilot with information and instructions. The interruptions broke the automatic, unconscious deep-learning reaction, because the chain of events bore no similarity to the ones learned, breaking the search path to the reference case stored in the brain. The overload prevented the brain from switching to a reasoning mode, forcing it to fall back on the learned pattern… which didn’t work, for the reason mentioned.
This is called a mental tunnel. The actions described here differ slightly from reality, but the principle is always the same. One might say that in real life there is no instructor giving false information. That may be true, but reality can also be worse, with wrong information from the tower (anybody can fail) or from the instruments, or both. I personally experienced such a mental tunnel, where only an external event could reboot my brain.
Mental tunnels are well known in aviation and are often part of the chain of causes that leads to major crashes. As a computer scientist, this kind of situation made me understand how stupid we can become, and how close our learning mechanisms are to deep learning systems. Any false or unknown input can even put us in a loop where the brain has no solution anymore. The only way out of such a situation is a “reset” from external help. Our brain can totally fail to find a solution, just like a deep learning algorithm placed in a situation where it cannot converge.
Deep learning stacks up multiple layers of learning algorithms that are purely driven by data. There is no reasoning such as “the plane needs the gear down for landing, so I should drop the gear”. It works more like “I am downwind, my speed is 85kts, I have performed these actions, I drop the gear, I land without crashing: this is fine” or “I am downwind, my speed is 85kts, I have performed these actions, I do not drop the gear, I crash the plane: this is not fine”. The successive layers work on different levels of information and produce series of micro-predictions in the form of a success probability. Just like humans, insufficient training will lead to a bad decision, and just like humans, proper training will lead to a good decision. But unlike a pilot, a deep learning system can be trained on a huge data set representing not only the situations one pilot may experience in his or her life, but the situations of many pilots and instructors. Unlike humans, deep learning is able to learn from others’ mistakes.
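To make that concrete, here is a deliberately toy sketch (my own illustration, not any real avionics or training code): a single logistic neuron trained purely on invented labelled situations. It ends up “knowing” that the gear must be down, without ever being given that rule, only examples of fine and not-fine outcomes.

```python
# Toy illustration: learning "gear down => safe landing" from data alone.
# Features and data are invented for the sketch.
import math
import random

random.seed(0)

# Each situation: [gear_down, flaps_set, speed_ok]; label 1 = landing fine.
# In this synthetic data, only the gear actually decides the outcome.
data = [([g, f, s], g) for g in (0, 1) for f in (0, 1) for s in (0, 1)]

w = [0.0, 0.0, 0.0]
b = 0.0
lr = 0.5

def predict(x):
    """Success probability for a situation, via a logistic neuron."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Plain stochastic gradient descent on the crash / no-crash examples.
for _ in range(2000):
    x, y = random.choice(data)
    err = predict(x) - y
    for i in range(3):
        w[i] -= lr * err * x[i]
    b -= lr * err

print(round(predict([1, 1, 1]), 2))  # gear down -> high success probability
print(round(predict([0, 1, 1]), 2))  # gear up   -> low success probability
```

No rule about landing gear appears anywhere in the code; the weight on the gear feature simply grows because every crash in the data had the gear up. That is the article’s point in miniature.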
There is a big debate about AI and how far along we are in this journey. Guess what? Since those experiences, I am among those who think AI is close to a much bigger future. Be honest: on a busy day, just count the percentage of actions you performed without consciously thinking you were processing them… you’ll be surprised.
The image illustrating this article was designed by our UX/UI expert Laurent Untereiner using a mix of artificial intelligence and human work: the runway, clouds and background were generated with Midjourney AI.
About Francois Vaille
Formerly an IT manager, CIO, managing partner in an IT company and start-up founder, François is an achiever, innovator, intrapreneur and entrepreneur. He helps Sogeti’s customers in their digital transformation thanks to his helicopter view, from C-level to coders.
More on Francois Vaille.