by Peter A. Arthur-Smith
“Artificial intelligence promises to make life easier, but so far it’s made some jobs a little harder.” A quote from CNBC by Ruiqi Chen, Editor at LinkedIn.
Chen went on to write: ‘Tech companies are scrambling to win the AI “rat race,” and the pressure is burning out AI engineers. Employees who spoke anonymously said they’re facing long hours, tight deadlines and shifting goalposts just to keep up with industry leaders.’ This is further reinforced by a May 2024 New York Times article entitled ‘A.I.’ and ‘Start-Up’ Are a Tough Combination, by Cade Metz, Karen Weise and Tripp Mickle. It cites the research firm PitchBook, which estimated that investors had poured $330 billion into around 26,000 A.I./machine-learning companies over the past three years.
Some of the better-known start-ups, such as Inflection AI, Stability AI and Anthropic, have struggled or virtually gone bust despite their generous funding, as they burn through investors’ money trying to become established in the field; just look at the multi-thousand-company venture figure in the prior paragraph. Apparently, generative AI models cost billions to create and maintain.
Some engineers also believe their employers are pushing AI developments to appease investors, without concern for AI’s impact on issues such as climate change and security: climate change, because AI consumes far more power than typical industrial users, and security, because many AI watchers are worried about its ultimate impact on workforce jobs. “Some are now seeking opportunities outside of AI,” as reported by Ruiqi Chen, Editor at LinkedIn, in an early May 2024 posting.
Then there are the business aspects of AI, such as whether it will generate enough sales to justify the cost. OpenAI has apparently been challenged to broaden its sales, particularly in light of skepticism around AI’s tendency to produce inaccurate answers. Microsoft is apparently playing the long game: it has invested billions and intends to open an AI lab in London.
Beyond that, you see articles about Ethan Mollick, a professor of entrepreneurship at the University of Pennsylvania’s Wharton School, who has apparently become the go-to expert on AI for the White House, Google, JPMorgan and corporate America. He and his students have apparently been experimenting with AI for years. In a Wall Street Journal Review article in April 2024, he was quoted as saying that Microsoft’s new Copilot AI is dangerous because it “automates middle management in the worst possible way.”
And this is where this writer has even greater reservations. In the very near future, I plan to participate in a webinar entitled Will AI Make Leaders Obsolete? Its headline panelists will include the CIO at Microsoft and an AI expert from Wharton Business School. This is where things become delicate: although the two panelists are clearly experts in the AI field, one wonders about their expertise in leadership. If they are leadership experts too, they might have paused before taking on this topic, because it could become a minefield.
You see, this writer has come to understand the five fundamentals of leadership as being vision, integrity, courage, humility and wisdom. All five factors have a strong human dimension. Wisdom, it could be argued, rests on tremendous knowledge, and computers and AI will eventually have an abundance of know-how; but wisdom is also based upon human judgment. Therefore wisdom, along with the other four leader characteristics, depends on human sentience. So many expert articles have convinced me that AI doesn’t have that sentience, even though there are self-serving opinions that disagree. For that reason, the webinar’s topic question could almost seem like an oxymoron.
Naturally, I should wait and see what stance the panelists take. Even then, like Mollick quoted above, I have severe reservations about AI stalwarts pursuing the management/leadership line, because they are venturing into an artform that non-sentient, robotic equipment ought not to be pursuing. We somehow have to find a way to push back against AI ventures blindly pursuing seemingly opportunistic product strategies. It wasn’t so long ago that IBM’s Watson supercomputer was extolling its virtues to medical doctors and scientists. It was billed as providing superior medical advice to doctor-subscribers and scientists alike. Many major companies signed up with multi-million-dollar budgets. Most of them withdrew within the last couple of years, because the quality of output just didn’t justify the investment.