It is no surprise that tools like ChatGPT and OpenAI's other offerings are trending like wildfire; I know because those very same tools told me so in a suitably non-biased way. I will admit that over the years I have seen many things come along that felt like the edge of a new era of Robot Utopia, from the advent of Clippy™ to Alexa, Cortana, and Siri making eerily accurate recommendations, but this time it feels very different.
First, I want to point out that the cases I'm discussing, where AI "helper" tools will be used, are not the same cases where we'd use content creation tools. Those are certainly relevant, but here we are delving into the actual "instructions" for the systems that perform the tasks we're trying to accomplish, a task we have been putting off for a while. We might even say that the recent rise of Low Code is a direct result of this, as the systems have become "smart" enough to get us more (and not just most) of the way through with less effort and knowledge. Sometimes that is on purpose, abstracting away the details we shouldn't care about on the path to Cloud Architecture pattern Nirvana, and sometimes it is by accident, since the whole point was to get us there while still expecting the human to perform the due diligence necessary to check the work. Self-driving cars might be an example, where the assumption is that the human driver will handle any exceptions or nuance because they are expected to pay attention!
The solution review process for me has definitely become more interesting, as I have to think through my assumptions about the bits under the covers and whether they are exactly what I have come to expect, or whether something has fundamentally changed without notice. With open source projects or public libraries, it was a matter of trust and transparency as well as due diligence and testing in well-defined Application Lifecycle Management or DevOps pipelines. Now, so much is abstracted in a way that goes far beyond subtle changes to libraries beneath libraries. If trust is not already at the forefront of any solution that has a direct impact on someone's health or livelihood, it should be. The fact that this has been true since just before the pandemic was certainly a sobering realization for me and a reminder that, even with all the accelerators and convenience of Low Code platforms, I cannot let my guard down for a second longer than I may have already done.
Okay, that certainly sounded dramatic from a moral perspective, but what about the little things that surely wouldn't stop the world in its tracks as the price for a quick win? So far, it doesn't appear that we are training nuances into our robot masters by using Code Assist (GitHub), Power Fx (Power Apps/Automate and Dataverse), or AI Modeler (an OpenAI implementation since 2019) in Microsoft's Power Platform, even with the feedback loops built into those systems and recommended practices such as the analytics provided by the No Code Power Virtual Agents feature (and yes, there are equivalents in the non-Microsoft world).
In terms of how we learn to use these tools, I noticed how many PowerPoint slides I had to change just to keep training current, since those slides relied mostly on step-by-step instructions intended to help the student learn. If that same user can ask ChatGPT to "create code" in the Low Code environment of choice to do something specific, complete with explanations, did that user need to develop any muscle memory at all? This is especially true when the "answer" and its integration come quickly and seamlessly, such as the "tell Power Automate what you want to do" preview feature that literally creates a near-complete Flow solution from a straightforward sentence. What if the answer is wrong? Who decides that it is, and if so, when can a second opinion be provided?
For now, if you are focusing on training or learning tools, it is time to go back to the basics and see what needs to be added as a product of this "new tech," if only to provide context, or at the very least to give the human advisor a way to evaluate what the AI is providing, a.k.a. GUIDANCE. We would be remiss if we just let this slide through our organizations without some accompaniment, at the very least, since it absolutely alters how we learn and use the tools.
You are always welcome to join the discussion, and feel free to use your free ChatGPT account to get your thoughts flowing.
About Ralph Rivas
I am a seasoned professional with nearly two decades of experience delivering quality software and solutions. I am currently with Sogeti in the corporate Applications and Cloud Technologies group as a National Solutions Architect focused on the Power Platform and M365 ecosystem where I am actively promoting and growing the technologies and features to help customers make the most of their digital modernization journey.
More on Ralph Rivas.