by Peter Arthur-Smith
“There’s controversy about what these (AI) models are actually doing and some of the anthropomorphic language that is used to describe them,” explained Melanie Mitchell, PhD, Professor at the Santa Fe Institute, featured in the Wall Street Journal Exchange article “We Now Know How AI Thinks. It Isn’t Thinking At All!” by Christopher Mims, April 2025.
Don’t get me wrong: AI as we know it today holds great promise for day-to-day efficiency, operational issues and routine tasks, especially since these are usually already well documented – time efficiency, accuracy, cost control and consistent reporting. Its value is much more debatable, however, when it comes to creativity, breakthroughs, longer-term strategies and elegant solutions – the realm of “effectiveness-think,” where we optimize our many variables. So, as the WSJ article suggests, we need some healthy skepticism as a bulwark against getting ahead of ourselves amid all the speculative AGI hype.
Increasingly, experts like Professor Mitchell are getting under the AI hood and evaluating what it’s really capable of, notwithstanding all the media hoopla. There have been great expectations that next-generation AGI (Artificial General Intelligence) would become available this year, although the kick-off date keeps slipping. This is likely because experts hoped AGI would show that it “thinks” like humans and could therefore replace many of them, and that hasn’t quite materialized so far. Mims’ article is just one of a number of emerging expert viewpoints pinpointing a gap between human “thinking” and that of machines. So let’s hold our horses for a minute!
Apparently, researchers are increasingly developing tools that can look inside these AI models. The more they discover about how the models do their tricks, the more reservations they have. Rather than building mental models of various situations and then reasoning through the task at hand – as humans do – AI seems to have developed “bags of heuristics,” a heuristic being a problem-solving shortcut.
Mims’ article also explains: “All of this work suggests that, under the hood, today’s AIs are overly complicated, patched-together Rube Goldberg machines full of ad-hoc solutions for answering our prompts.”
So, what do leaders do in these circumstances? They likely need to be wary of any plans to ditch oodles of people for the sake of AI profits and efficiency. AGI will probably fall short for the moment, because it doesn’t appear to possess the same reasoning or thinking capabilities as humans. It’s more likely to mimic human brain output than to produce original human thought, at least as we know it today. That mimicking capability is already advantageous for performing many routine and efficiency tasks.
Hence, the more astute students of AI realize that leaders should be “pairing” AI with humans, so the latter can anticipate what the machine needs, whether it’s fit to perform today, and whether its output is being put to the greatest advantage. It’s like the bright teenager who can perform many useful family tasks but benefits from a parent close at hand to ensure he or she has everything needed to succeed. Maybe we’ll call this minder an AI-buddy, although our diehard cost-efficiency, do-away-with-people types would prefer that we don’t think that way.
Consequently, going forward, enlightened leadership (EL) can envisage – until looming AGI proves a reliably better story – tiers of humans supporting AI/AGI. Level 1 will be that AI-buddy. Level 2 will regularly contemplate how to utilize the AI-buddy duo more “effectively” through regular program revisions and updates, much as today’s programmers do. Level 3 will be the business or organization strategists (humans) – with assists from their AGI – who will “think about” future work models and produce the prototypes for their Level 1 and 2 colleagues to implement. The efficiencies gained from this approach will more than pay for the humans involved.
By taking this more realistic and practical approach, we will have a far smoother transition to the AI-people partnership than some AI junkie/opportunist, “cat and mouse, fast profit” actors are envisaging. AI will relieve us of many repetitive, detail-oriented tasks that it can perform far more effectively and efficiently on a daily basis. AGI may eventually become useful as a strategic partner. At the same time, AI/AGI are unlikely to be panaceas, and we’ll return to finding ways to better tap society’s enormous human potential – potential we still haven’t been able to fully harness owing to our conventional management-leadership practices.
Because of these current practices we fall short in maximizing human potential, even as we invest billions of dollars to develop machines to replace people. Just imagine if, instead, we invested those billions in properly preparing and educating humans. That could be amazing, and it makes the current course all the more disappointing!
____________________________________________________________________
The proposed book So You Want to Lead? shares a fresh approach to optimizing human potential through Enlightened Leadership (EL) that can be applied to any type of organization – commercial (start-up or large), nonprofit, sports, academia, institutions or government. Since every successful organization relies upon effective leadership, its fundamental tenets apply across the full universe of organizations that exist today.
It promises to minimize our five current organizational curses – bureaucracy, hierarchy, efficiency-think, negative messaging, and various forms of corruption – and replace them with two-way communication, heterarchy, effectiveness-think, positive messaging, and greater degrees of integrity. From this standpoint, we have the prospect of building more rocket-ship cultures versus propeller-driven ones.