5 Generative AI Myths Debunked

Ever since the world first got a look at OpenAI’s ChatGPT in late 2022, generative AI (or “GenAI”) has dominated news headlines, board rooms, and big tech, and has even lifted the stock market. Generative AI is changing everything from cybersecurity to side hustles, and it has even prompted Meta to pivot away from the metaverse toward AI.

With all the hype around AI, are we setting ourselves up for disappointment and failure?

Possibly, but there are undoubtedly a lot of myths about generative AI that are misinforming the masses.

What is Generative AI?

Before we jump into debunking myths, let’s set a baseline understanding of what generative AI actually is.

Generative AI is a type of artificial intelligence (AI) that can create new content, such as text, images, audio, and video. It does this by learning patterns from existing data and then using this knowledge to generate new and unique outputs. Generative AI is capable of producing highly realistic and complex content that mimics human creativity, making it a valuable tool for many industries such as gaming, entertainment, and product design.

Some of the most popular use cases for GenAI include:

  • Image generation or manipulation: Create realistic images from text descriptions or prompts. This is being used to develop new forms of art, create virtual worlds, and improve the quality of existing photos. Midjourney has taken the internet by storm by creating breathtaking images from simple text prompts. Adobe Sensei has similar capabilities, especially for editing existing imagery.
  • Text generation: Large language models (LLMs) can be used to create realistic text, such as news articles, blog posts, and even creative writing. This technology is being used to generate marketing content, create chatbots, and improve the quality of machine translation. Google Bard, Bing Chat, and ChatGPT are the most popular text chatbots built on large language models (see the minimal sketch after this list).
  • Audio generation: Using AI to create realistic audio, such as music, voice recordings, and even sound effects. This technology is being used to develop new forms of music, create virtual assistants, and improve the quality of video games. Text-to-speech AI models and Meta’s AudioCraft are examples.
  • Video generation: You can even create realistic videos from text descriptions or prompts. This technology is being used to develop new forms of entertainment, create training materials, or complement news articles with videos. This can even include inserting an AI avatar or person, such as for a company HR or marketing video. Popular platforms include Synthesys, Invideo, and Colossyan.
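
To make the text-generation use case concrete, here is a minimal sketch using the open source Hugging Face transformers library; it assumes transformers and a backend such as PyTorch are installed, and uses the small, freely downloadable gpt2 model purely for illustration – not one of the commercial chatbots named above.

```python
# A minimal text-generation sketch with Hugging Face transformers.
# "gpt2" is a small, freely downloadable model used here only as an
# illustration; production systems would call a hosted LLM instead.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

result = generator(
    "Generative AI is changing the way businesses",
    max_new_tokens=40,       # cap the length of the continuation
    num_return_sequences=1,  # ask for a single completion
)
print(result[0]["generated_text"])
```

Production chatbots swap the small local model for a hosted LLM behind an API, but the basic prompt-in, text-out shape stays the same.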

5 Generative AI Myths Debunked

  1. No single model to “rule them all”
    Some models are good at summarization, others at bulleted lists, others at reasoning, and so forth. Industries, lines of business, and individual companies also have very different editorial tones for expressing knowledge. All of these factors should be considered when choosing your models.

    In fact, companies such as Google are taking the PaLM 2 large language model and customizing it for specific industry verticals, like cybersecurity and healthcare.
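
    To make the “right model for the job” idea concrete, here is a hypothetical routing sketch in Python; the task names and model identifiers are invented for illustration and do not refer to real products.

    ```python
    # Hypothetical model router: send each task to the model best
    # suited for it instead of one general-purpose model. All model
    # names below are illustrative placeholders.
    ROUTES = {
        "summarize": "small-summarization-model",
        "bullets":   "small-listing-model",
        "reasoning": "large-reasoning-model",
    }
    DEFAULT_MODEL = "general-purpose-model"

    def pick_model(task: str) -> str:
        """Return the model identifier best suited to the task."""
        return ROUTES.get(task, DEFAULT_MODEL)

    print(pick_model("summarize"))  # -> small-summarization-model
    print(pick_model("translate"))  # -> general-purpose-model (fallback)
    ```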
  2. Bigger is better
    Generative AI models consume large amounts of computing resources. The large funding rounds required by companies building foundation models are just one testament to these costs. Potentially high compute costs are one reason why using the right model for the job is so important: the larger the model, the more it costs to query.

    This is why smaller, “nimble AI” models – defined as under 15 billion parameters – are considered more cost-effective than larger, GPT-4-class models with over 100 billion parameters, and just as effective for domain-specific expertise.
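
    As a rough, back-of-envelope illustration of that cost intuition, here is a short sketch; the per-parameter cost figure is an invented assumption, since real pricing varies widely by provider, hardware, and token volume.

    ```python
    # Back-of-envelope sketch of why model size drives query cost.
    # The cost constant below is invented purely for illustration.
    COST_PER_BILLION_PARAMS_PER_QUERY = 0.00001  # dollars, assumed

    def query_cost(params_billions: float, queries: int) -> float:
        """Rough estimate: compute (and cost) scale with parameters."""
        return params_billions * COST_PER_BILLION_PARAMS_PER_QUERY * queries

    nimble = query_cost(15, 1_000_000)    # "nimble" model, ~15B params
    frontier = query_cost(100, 1_000_000) # frontier-scale, 100B+ params
    print(f"15B model:  ${nimble:,.2f} per million queries")
    print(f"100B model: ${frontier:,.2f} per million queries")
    ```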
  3. My data is private
    Leaking proprietary company data through publicly accessible chatbot tools like Bard or ChatGPT is so concerning that many companies, such as Amazon, Apple, Verizon, and Goldman Sachs, have banned access to them. The list keeps growing.

    Some public generative AI services may leverage user data for future training sets, potentially exposing proprietary data. Let’s say a tech company is exploring an acquisition of a strategic company, and someone in the M&A department queries a public model, asking, “What are some good takeover targets for the semiconductor industry?”

    If that query contributes to the model’s training data, the service could learn to answer this question for anyone.

    In August 2023, updated policies for Zoom enterprise customers established Zoom’s right to use some aspects of customer data, with consent, to train and tune its AI and machine-learning models. Customers who enable features like meeting summarization or suggested chat messages must sign an agreement consenting to this training, according to CNBC and an updated Zoom blog post.

    Make sure you clearly understand the terms and conditions of any AI or generative AI service you use – whether you’re an enterprise customer or using a public, consumer account.
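
    One common, if imperfect, mitigation is to scrub obviously sensitive strings from prompts before they ever leave your network. The sketch below uses simple regexes and an assumed blocklist; a real deployment would rely on a dedicated data loss prevention (DLP) product.

    ```python
    import re

    # Hypothetical pre-flight scrubber: redact obviously sensitive
    # strings before a prompt reaches a public generative AI service.
    BLOCKLIST = ["Project Falcon", "takeover target"]  # assumed terms
    EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

    def scrub(prompt: str) -> str:
        """Redact e-mail addresses and blocklisted terms."""
        prompt = EMAIL_RE.sub("[REDACTED EMAIL]", prompt)
        for term in BLOCKLIST:
            prompt = prompt.replace(term, "[REDACTED]")
        return prompt

    print(scrub("Draft a memo on Project Falcon for alice@example.com"))
    # -> Draft a memo on [REDACTED] for [REDACTED EMAIL]
    ```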
  4. The responses are always factually correct
    Have you ever asked a chatbot a question and received an answer you know is factually wrong, delivered with complete confidence as if it were true? This is called a hallucination. And when it’s not so obvious that the answer is incorrect, hallucinations can lead to real problems.

    Imagine a chatbot being manipulated to produce misinformation, disinformation, or phishing campaign emails. What if a chatbot suggests the wrong medicine to treat a symptom, or a firewall policy configuration that disrupts network availability?

    Sam Altman, CEO of OpenAI, thinks hallucinations in OpenAI’s services will be brought under control within a couple of years. “It’s a balance of creativity and perfect accuracy,” he said in an Associated Press interview.
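
    There is no foolproof fix today, but one simple guardrail is to ask the model the same question several times and flag answers that disagree, since hallucinated details often vary between runs. In the sketch below, ask_llm is a hypothetical stand-in for whatever chatbot API you use.

    ```python
    from collections import Counter

    def ask_llm(question: str) -> str:
        """Hypothetical stand-in for a real chatbot API call."""
        raise NotImplementedError("wire this to your provider's client")

    def consistent_answer(question: str, samples: int = 5,
                          threshold: float = 0.8) -> str | None:
        """Ask the same question several times and flag disagreement.

        Low agreement between runs is a rough signal that the answer
        may be hallucinated and needs human review.
        """
        answers = [ask_llm(question) for _ in range(samples)]
        best, count = Counter(answers).most_common(1)[0]
        if count / samples < threshold:
            return None  # answers disagreed; escalate to a human
        return best
    ```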
  5. You can ask any question
    Why build multiple chatbots and models when you can have one model to rule them all and answer every question? This is the approach most business leaders and customers take – until they begin researching how generative AI and large language models actually work.

    Most likely, you will not want to build a single, all-encompassing AI chatbot capable of answering any question about your organization. It would be expensive and inefficient, and it would pose a security concern.

    Instead, you’ll want to use a tailored generative AI model for each specific business function. This also helps prevent someone from accessing information about a part of the business they shouldn’t be able to see.

    Implementing Identity and Access Management (IAM) and user authentication for enterprise-grade chatbots is already possible with cloud service providers like Google, Azure, and AWS, and is considered best practice. Organizations should be able to audit and log who is asking a chatbot which questions, and what data they receive in response.
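
    As a rough illustration of that best practice, here is a minimal authorization-and-audit-logging sketch; the policy table and answer_question function are hypothetical placeholders, since a real deployment would delegate identity checks to an IAM provider.

    ```python
    import logging

    # Minimal authorization and audit-logging sketch for an enterprise
    # chatbot endpoint. The policy table and answer_question() below
    # are hypothetical placeholders.
    logging.basicConfig(level=logging.INFO)
    audit_log = logging.getLogger("chatbot.audit")

    ALLOWED_TOPICS = {"alice@example.com": {"hr", "benefits"}}  # assumed

    def answer_question(question: str) -> str:
        raise NotImplementedError("wire this to your chatbot backend")

    def handle_question(user: str, topic: str, question: str) -> str:
        """Check authorization, log the request, then call the model."""
        if topic not in ALLOWED_TOPICS.get(user, set()):
            audit_log.warning("DENIED user=%s topic=%s", user, topic)
            raise PermissionError("user is not authorized for this topic")
        audit_log.info("ASK user=%s topic=%s question=%r",
                       user, topic, question)
        return answer_question(question)
    ```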

Disclaimer: The author of this article is a current employee of Google. This article does not represent the views or opinions of his employer and is not meant to be an official statement for Google, or Google Cloud.
