There is a heated global debate, maybe even an arms race, over OpenAI and ChatGPT, the San Francisco-based start-up's blockbuster app.
Is it the harbinger of humanity's doom, as Elon Musk and Stephen Hawking predicted in 2017, or a powerful digital tool that will significantly improve productivity, recruiting, and scientific research once its ethical issues are safely resolved? Will it significantly reduce white-collar jobs?
The AI chatbot can almost instantly generate paragraphs of fluid, human-like text in answer to basically any prompt you can come up with. It can also write resumes and college papers, answer customer and employee questions, match applicants to jobs, and work through coding problems.
But it does have limitations. According to CNBC, don't rely on it to do your math homework correctly or to substitute for accurate, researched writing. Even so, the chatbot has become the fastest-growing consumer-facing application in history, according to a new analysis from the Swiss investment bank UBS, as reported by many financial outlets.
According to Gaurav Gupta, a partner at Lightspeed Venture Partners, "Most AI in the last couple of decades has really been around analyzing existing data. Generative AI is very different. It allows you to create brand new content. That content can be text like a news article or poetry or marketing copy, a website. It could be video. It could even be audio, like creating brand new music." However, Gupta warns that generative AI has its limits for now: business areas that require a high degree of accuracy and human judgment are simply not suitable for ChatGPT. The technology might be most useful for automating repetitive tasks within sales and marketing teams: "It could replace a junior salesperson who is prospecting, or a customer service rep that responds to questions," he said.
Will AI really generate new music like the work of Mozart, Bach, Johnny Hodges, Charles Mingus, and The Beatles? I doubt it, but AI already stands accused of plagiarism by artists and musicians.
By the way, Elon Musk was an early investor in OpenAI but cut his ties three years after it started over a dispute about its control and direction. According to The Wall Street Journal, Microsoft Corp., which has invested billions of dollars in OpenAI, said in March that it was integrating ChatGPT into its own enterprise software products, and more recently said it would add the technology to Bing, Microsoft's search engine. Google has called in co-founders Larry Page and Sergey Brin to help fight back against the combination of Bing and ChatGPT and stay on top of the economic AI arms race for digital search.
Will OpenAI significantly reduce jobs?
A working paper posted March 17 to arXiv, the preprint server hosted by Cornell University, looked at the impact of GPTs (generative pre-trained transformers, a class of large language models, or LLMs) on the workforce. The authors believe the impact will be significant, but the timeline is hard to predict. Most high-paying white-collar workers will find LLMs taking over the most common tasks in their fields, while those whose work depends on critical thinking and science will see much less impact.
The researchers write:
Our findings reveal that around 80 percent of the U.S. workforce could have at least 10 percent of their work tasks affected by the introduction of LLMs, while approximately 19 percent of workers may see at least 50 percent of their tasks impacted. We do not make predictions about the development or adoption timeline of such LLMs. The projected effects span all wage levels, with higher-income jobs potentially facing greater exposure to LLM capabilities and LLM-powered software. Significantly, these impacts are not restricted to industries with higher recent productivity growth.
Our analysis suggests that, with access to an LLM, about 15 percent of all worker tasks in the US could be completed significantly faster at the same level of quality. When incorporating software and tooling built on top of LLMs, this share increases to between 47 and 56 percent of all tasks. This finding implies that LLM-powered software will have a substantial effect on scaling the economic impacts of the underlying models. We conclude that LLMs such as GPTs exhibit traits of general-purpose technologies, indicating that they could have considerable economic, social, and policy implications.
This sounds game-changing to me. History shows, however, that implementing such a technology at scale can take a decade.
One of my software engineering friends uses ChatGPT to solve difficult coding challenges and was disappointed to learn that Samsung employees are in hot water after they reportedly leaked sensitive confidential company information to ChatGPT on at least three separate occasions. According to Gizmodo, the leaks highlight both the widespread popularity of the new AI chatbot among professionals and the often-overlooked ability of OpenAI to vacuum up sensitive data from its millions of users.
Of course, university professors and high school teachers are having fits with papers submitted by their students who used the app. What will happen to ethics and human beings’ capability to think and create?
This reminds me of the 2017 debate between Elon Musk and Mark Zuckerberg over whether AI could reach the point where it thinks on its own, faster and more cleverly than humans, and takes over. That has been the plot of movies such as 2001: A Space Odyssey, The Terminator, and The Matrix for decades.
Back in 2017, according to USA Today, Elon Musk, then CEO of Tesla and SpaceX, told the National Governors Association that his exposure to AI technology suggests it poses "a fundamental risk to the existence of human civilization." Facebook founder Mark Zuckerberg parried such doomsday talk, including warnings that AI could be "the worst event in the history of civilization," with a video post calling such negative talk "pretty irresponsible."
Cosmologist Stephen Hawking agreed with Elon Musk. The debate continues.
Upset Artists, Ethics, and Regulation
According to Gizmodo, the artist community has been particularly outraged that its work has been used to train generative AI models such as OpenAI's DALL-E 2, StabilityAI's Stable Diffusion, and Midjourney. This week, Spawning.ai, which runs the site haveibeentrained.com, said that requests to remove artwork from AI datasets have resulted in 78 million artworks being opted out of AI training. StabilityAI, ArtStation, and Shutterstock have promised to abide by these opt-out requests, but that doesn't mean other companies or large datasets will.
There is also the question of how major tech companies plan to use websites and users' own data to train AI. OpenAI and its CEO Sam Altman have promised not to use companies' data when they purchase the new ChatGPT API, but regular users should still expect that any information they put into an AI prompt will be used for training. Without any kind of digital privacy law, Gizmodo writer Kyle Barr writes, "We should only expect users' data will be used to train AI, for good or ill."
Regulation will race to catch up with generative AI to prevent discrimination and protect intellectual property and privacy.
The U.S. Chamber of Commerce, the largest pro-business lobbying group in the country, released a report on generative artificial intelligence in April calling on lawmakers to create some form of regulation for the ballooning technology. The report, however, was short on specifics.
According to Reuters, the Biden administration said Tuesday, April 11, that it is seeking public comments on potential accountability measures for artificial intelligence (AI) systems as questions loom about their impact on national security and education.
European legislators are adding new provisions to regulate generative AI. China's top internet regulator has issued proposed rules that would require companies to undergo a government security review, including checks to ensure the outputs don't subvert state power, incite secession, or disrupt social order, before opening up their generative AI services.
Can OpenAI avoid the pitfalls of the earlier digital facial recognition technologies used to improve recruiting?
Earlier attempts to use digital facial recognition software to read facial expressions, body posture, and tone of voice to recommend whom to hire struggled to pass reliability measures, especially when decisions involved women and people of color.
Until these issues are sorted out, my advice to Human Resources and Recruiting leaders is to keep OpenAI's tools in the safe zone of answering questions, finding talent more quickly, and matching talent to jobs. Any technology that makes predictions about whom to hire and which skill sets to value, however, needs to be statistically validated to prove it does not discriminate. Compliance requirements by state and country need to be monitored to be sure confidential and personal information is not misused or compromised.
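To make "statistically validated" concrete, here is a minimal sketch of one common first screen for adverse impact, the EEOC's four-fifths rule: any group whose selection rate falls below 80 percent of the highest group's rate warrants further review. The group names and numbers below are hypothetical, and this check is only a screen, not a substitute for a full validation study.

```python
def selection_rate(hired, applicants):
    """Fraction of applicants in a group who were selected."""
    return hired / applicants

def four_fifths_check(rates):
    """Apply the EEOC four-fifths rule.

    Returns, for each group, True if its selection rate is at
    least 80% of the highest group's rate (passes the screen),
    False if it falls below that threshold (flag for review).
    """
    highest = max(rates.values())
    return {group: rate / highest >= 0.8 for group, rate in rates.items()}

# Hypothetical screening results from an AI matching tool.
rates = {
    "group_a": selection_rate(hired=48, applicants=100),  # 0.48
    "group_b": selection_rate(hired=30, applicants=100),  # 0.30
}
# group_b's ratio is 0.30 / 0.48 = 0.625, below 0.8, so it is flagged.
print(four_fifths_check(rates))
```

A tool that fails this screen is not automatically discriminatory, but it shifts the burden to the vendor or employer to show the selection procedure is job-related and consistent with business necessity.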
Finally, a human touch is always useful in building relationships and closing a deal, and I never want to be on a spaceship with a computer named HAL.
Will OpenAI be humanity's doom or a godsend for productivity, recruiting, and scientific research? What do you think?
Victor Assad is the CEO of Victor Assad Strategic Human Resources Consulting and managing partner of InnovationOne, LLC. He works with organizations to transform HR and recruiting, implement remote work, and develop extraordinary leaders, teams, and innovation cultures. He is the author of the highly acclaimed book, Hack Recruiting: the Best of Empirical Research, Method and Process, and Digitization. He is quoted in business journals such as The Wall Street Journal, Workforce Management, and CEO Magazine. Subscribe to his weekly blogs at http://www.VictorHRConsultant.com.