This article is part of WorkLife’s artificial intelligence special edition, which breaks through the hype around AI – both traditional and generative – and examines what its role will be in the future of work for desk-based workers.
Artificial intelligence is changing the way we work, for good. But developments are moving so fast that keeping up with the latest AI lingo can make your head spin.
Since ChatGPT hit the mainstream, it seems like new generative AI tools are being introduced daily. To make sense of these developments, we’ve put together a breakdown of some of the most commonly used terms, and what they actually mean.
AI ethics: A broad collection of considerations for responsible AI that combines safety, security, human concerns and environmental considerations. It focuses on questions like how AI collects and processes data in a way that might include biases or discriminatory information.
AI ethicists: Companies are increasingly hiring people in these roles to prevent AI from causing immediate harm, and to ensure that they are deploying it ethically and responsibly.
Bard: Google’s generative, conversational AI chatbot, announced in Feb. 2023. It is branded as an experiment that “may give inaccurate or inappropriate responses,” according to its website. There is currently a waitlist to access it.
CAIO: Chief AI officer – a new role being introduced by companies that want a C-level executive focused on how to build the business by taking advantage of the rise of AI. The remit varies from industry to industry. Experts suspect the role will disappear once AI becomes as routinely used as the internet is.
Chatbot: A computer program designed to simulate conversation with human users. Chatbots often use natural language processing techniques to understand user input and generate appropriate responses.
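To make the idea concrete, here is a toy chatbot sketch. It is deliberately simplified: it uses fixed keyword rules rather than the natural language processing models that power commercial chatbots, and the keywords and replies are purely illustrative.

```python
# A toy rule-based chatbot: it matches keywords in the user's input
# and returns a canned reply. Real chatbots replace these fixed rules
# with natural language processing models.

RULES = {
    "hello": "Hi there! How can I help you today?",
    "hours": "We're open 9am-5pm, Monday to Friday.",
    "bye": "Goodbye! Have a great day.",
}

def reply(user_input: str) -> str:
    text = user_input.lower()
    for keyword, response in RULES.items():
        if keyword in text:
            return response
    return "Sorry, I didn't understand that."

print(reply("Hello!"))
print(reply("What are your opening hours?"))
```

Even this tiny version shows the basic loop: read the user's message, decide what it means, and generate a response.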
ChatGPT: A free-to-use natural language processing tool driven by AI technology that answers follow-up questions, admits mistakes, challenges incorrect premises and rejects inappropriate requests. It operates on GPT-3.5. It was released on Nov. 30, 2022 and garnered attention for its detailed responses.
ChatGPT Plus: For $20 a month, subscribers get access to ChatGPT even during peak times, faster response times, and priority access to new features and improvements. It operates on GPT-4. It was released in Feb. 2023.
Copilot: While the term is starting to be used for the way humans can work hand in hand with AI, it’s also the brand name of Microsoft’s suite of AI workplace products.
DALL-E: A free-to-use learning model developed by OpenAI to generate digital images from natural language descriptions. In other words, you can write a sentence describing exactly what you want to see and DALL-E will create it in a matter of seconds. It was first released in Jan. 2021, but was upgraded significantly in its second version, DALL-E 2, released in Apr. 2022.
Deep fakes: Synthetic media in which a person’s image, video or voice is manipulated to look or sound like someone or something else. There are concerns that this kind of AI can be used to create fake news and misleading videos.
Deep Learning: A subfield of machine learning that uses artificial neural networks with multiple layers to extract and learn intricate patterns from vast amounts of data.
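What those “multiple layers” mean can be seen in a hand-written miniature. This sketch is illustrative only: the weights are made up, and real deep learning models have millions or billions of them, learned from data rather than typed in.

```python
import math

# A tiny two-layer neural network, written out by hand.
# Each "layer" multiplies its inputs by weights, adds a bias,
# and passes the result through a nonlinear function.
# Deep learning stacks many such layers and learns the weights from data.

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def layer(inputs, weights, bias):
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(total)

def network(x1, x2):
    # Layer 1: two hidden units reading the two inputs
    h1 = layer([x1, x2], [0.5, -0.6], 0.1)
    h2 = layer([x1, x2], [-0.3, 0.8], 0.0)
    # Layer 2: one output unit reading the two hidden values
    return layer([h1, h2], [1.2, -0.7], 0.2)

print(network(1.0, 0.0))
```

The output is always a number between 0 and 1; stacking more layers is what lets such networks pick up the intricate patterns the definition describes.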
Emergent behavior: Tech speak for saying that some AI models show abilities that weren’t originally intended. Like side effects.
EU AI Act: The European Union’s Artificial Intelligence Act, a regulatory framework being designed as a blueprint for the responsible deployment of AI within organizations. Its goal is to give employers guardrails to ensure AI is deployed in a way that doesn’t harm customer or worker data privacy rights, much as the EU did for data privacy with the General Data Protection Regulation. It is still being drafted.
Generative AI: Different from workplace automation, generative AI refers to a category of AI algorithms that generate new outputs based on the data they have been trained on. It produces content that is new and unique and spans text, audio, image and more. It was kickstarted by GPT, but other popular generative AI models include Jasper and GitHub Copilot.
Generative adversarial network: A class of machine learning frameworks prominent in generative AI. Two neural networks compete: one generates candidate outputs while the other tries to tell them apart from real data, and both become more accurate through the contest.
GPT-3.5: A language prediction model that was created by OpenAI. It can answer questions, write essays, summarize long texts, and more. However, the outputs are not always perfect. It can make mistakes, hallucinate and create fake outputs.
GPT-4: A language prediction model that was created by OpenAI. It is its most advanced system, producing safer and more useful responses than its predecessor, thanks to its broader general knowledge and problem solving abilities. It outperforms GPT-3.5 on standardized tests, scoring in higher approximate percentiles among test takers, including on the bar exam. It’s 82% less likely to respond to requests for disallowed content and 40% more likely to produce factual responses than GPT-3.5.
Guardrails: Big tech companies are currently building software and policies to ensure that AI models don’t leak data or produce disturbing content. It is a part of responsible AI.
Hallucinations: When AI literally makes up facts. When it doesn’t know the answer, an AI algorithm can confidently generate bizarre outputs that aren’t consistent with its training data, filling the gap in its knowledge with whatever is statistically closest. One recent, high-profile example: a U.S. lawyer got into hot water in court after using ChatGPT for background research on his case, because the court cases the AI surfaced to support his argument turned out to be totally bogus.
Harvey: Designed specifically for law firms, Harvey is an AI platform that assists with contract analysis, due diligence, litigation, and regulatory compliance and can help generate insights, recommendations, and predictions based on data. It is backed by the OpenAI Startup Fund and is built on OpenAI and ChatGPT technology. PwC announced its global partnership with Harvey in March 2023.
Large Language Model (LLM): A type of deep learning model trained on a large dataset to perform natural language understanding and generation tasks. Bard and GPT-3 are both examples of LLMs. (Google refers to its proprietary LLM as LaMDA.)
Local Law 144: A New York City law regulating employers’ use of AI in hiring, enacted in Dec. 2021. Since then, it has gone through multiple iterations. At its core, it requires that a bias audit be conducted on an automated employment decision tool before the tool is used. It will be enforced from Jul. 5, 2023.
Midjourney: A competitor to DALL-E 2, Midjourney also creates AI-generated artwork. It’s free to sign up.
OpenAI: The AI research and deployment company that created ChatGPT and made it free for all to access. It is governed by a nonprofit with a capped-profit model, and its mission is to “ensure that artificial general intelligence benefits all of humanity,” according to its website. Its launch of ChatGPT has made it a household name, and its product range also includes DALL-E.
OpenAI API: This provides access to OpenAI’s advanced language models, including GPT-3 and GPT-4. That means developers can integrate natural language processing capabilities into their own applications. Some of the first to do this include Snap, Khan Academy, Duolingo, and Morgan Stanley.
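To give a flavor of what that integration looks like, here is a sketch of the kind of request a developer assembles for OpenAI’s chat completions endpoint. The model name and prompt are illustrative, and actually sending the request requires an API key, so this sketch only builds the payload.

```python
import json

# Sketch of a request body for OpenAI's chat completions API.
# The "messages" list carries the conversation: a system message
# sets the assistant's behavior, and user messages carry the prompts.
# Sending this to https://api.openai.com/v1/chat/completions
# requires an API key, so here we only construct and print it.

payload = {
    "model": "gpt-3.5-turbo",  # illustrative model choice
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize this meeting in two sentences."},
    ],
}

body = json.dumps(payload, indent=2)
print(body)
```

The response comes back as JSON too, which is what lets apps like the ones named above slot the model’s output directly into their own interfaces.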
Prompt engineers: People who work specifically on curating the best prompts for maximum results. Experts say that the better you are at prompting GPT, the more likely you are to see success with the outputs. PromptBase is a marketplace where prompt engineers can sell their high-quality prompts that produce the best results.
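Much of that curation boils down to wrapping the same request in extra context. This toy template shows the general shape; the specific wording is illustrative, not a proven best-practice prompt.

```python
# A simple prompt template of the kind prompt engineers refine:
# the same task, padded with an audience, an output format, and
# a length limit, which tends to produce more usable answers
# than a bare one-line question.

def build_prompt(task: str, audience: str, fmt: str) -> str:
    return (
        f"You are an expert assistant writing for {audience}.\n"
        f"Task: {task}\n"
        f"Respond as {fmt}, and keep it under 100 words."
    )

print(build_prompt(
    task="Explain what a large language model is",
    audience="a non-technical manager",
    fmt="three bullet points",
))
```

Swapping the audience or format line changes the output dramatically, which is exactly the lever prompt engineers experiment with.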
Prompt library: With the rise of generative AI, companies have begun producing prompt libraries. They hold an assortment of prompts that can be used for ChatGPT and beyond. It helps people who are not familiar with the technology tackle the learning curve and get the most out of these bots.
Stable Diffusion: A free-to-use, text-to-image diffusion model capable of generating photo-realistic images given any text input. Stability AI is the company behind this image generator.
StableLM: A suite of open-source large language models. On Apr. 19, the company behind it, Stability AI (maker of Stable Diffusion), announced that its models are available for developers to use and adapt on GitHub. It’s a rival to ChatGPT.
Title VII of the Civil Rights Act of 1964: Protects employees and job applicants from employment discrimination based on race, color, religion, sex and national origin. In May, the Equal Employment Opportunity Commission (EEOC) released a technical assistance document, “Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964,” which is meant to help employers understand how to abide by the Civil Rights Act of 1964 as AI is introduced.
Training: When you train AI, you’re teaching it to properly interpret data and learn from it in order to perform a task with accuracy.
Traditional AI: AI has many guises. To distinguish between the current hype around generative AI and the kind of AI that has existed and been used by organizations for years, tech-savvy people tend to refer to the latter as “traditional” AI. It essentially refers to AI that includes functions like detecting patterns, honing analytics, classifying data, and detecting fraud.
TruthGPT: An AI platform set up to challenge the offerings from Microsoft and Google. Billionaire Elon Musk announced it on Apr. 17 and accused OpenAI of “training the AI to lie.” He said TruthGPT will be a “maximum truth-seeking AI that tries to understand the nature of the universe.” It has not yet been developed, and there is not a proposed launch date yet.