
Two Can Keep a Secret, If One of Them Is Not ChatGPT
Maraam Jurnazi | 19/03/2025

1. Introduction: Generative AI—Your Digital Work-Bestie 

I’m no magician, but if you’re reading this article on a laptop or computer, I’d wager you have multiple tabs open, and, chances are, one of those is a generative AI tool.

Generative AI, often built on models such as Generative Pre-trained Transformers (GPT), is a type of artificial intelligence capable of processing text prompts as input and generating responses based on that input. These responses can range from text, like emails or reports, to visuals or even videos with more advanced models.

While many of us are amazed by the fascinating capabilities of these new tools, one critical ingredient often gets overlooked: regulations. With the EU's AI Act now coming into force and increasing government scrutiny worldwide, understanding and managing AI risks is no longer optional—it's becoming a legal imperative.

In this article, we'll focus on ChatGPT, a widely used generative AI tool developed by OpenAI that has gained tremendous popularity across various industries.

2. How Does ChatGPT Get It Right? Or… Does It?

Have you ever wondered how ChatGPT handles your work faster than you, despite your years of experience? Think of AI tools like ChatGPT as cars: they rely on models that act as their “engines.” These engines are trained on vast datasets using machine learning techniques, enabling them to recognize patterns in your input and generate probability-based responses.

When I say “probability,” I mean that AI predicts the most likely continuation of a sentence based on past patterns. For example, if you start a sentence with “Yesterday, I bought a T-shirt from a …”, ChatGPT is more likely to finish it with “clothing store” than “butcher.” The reason? It has learned from its training data that T-shirts are generally associated with clothing stores.
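Curious to see these probabilities firsthand? Here is a minimal sketch, assuming the Hugging Face transformers library is installed (an assumption for illustration, not something this article depends on), that asks BERT, the model behind Figure 1, to rank candidates for a missing word:

```python
# A minimal sketch: query BERT for the most likely words to fill a blank.
# Assumes "pip install transformers torch" has been run.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# BERT marks the missing word with its special [MASK] token. A word like
# "store" or "shop" should score far higher here than "butcher".
for prediction in fill_mask("Yesterday, I bought a T-shirt from a [MASK]."):
    # Each result pairs a candidate word with its probability score.
    print(f"{prediction['token_str']:>10}  {prediction['score']:.3f}")
```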

This pattern recognition doesn't stop at simple phrases—it extends to general language. Think of it as a highly diligent student absorbing every piece of information—relevant or not. AI models are trained on massive datasets scraped from the internet, books, code repositories, and user interactions. These datasets can include everything from public websites and open-source code to digitized books and user-generated content.

Figure 1. Example of BERT AI Model Probability Predictions. Source: AI Explorables

But What Happens When We Chat with ChatGPT?

While these datasets allow generative AI bots to create impressive results, they also mean that any sensitive information you input could inadvertently become part of the model’s learning experience, potentially exposing it to others or influencing future responses in unexpected ways.

The downside? Any sensitive data you share—personal or related to your organization—could be used to enhance the model, raising concerns about data security and confidentiality.

3. The Samsung Case: A Lesson in AI Security

This is precisely what happened at Samsung. One of their engineers used ChatGPT to review semiconductor source code by pasting it into a prompt. The result? A significant data exposure incident. Sensitive company data was inadvertently exposed, leading Samsung to implement stricter AI policies, including a 1,024-byte input limit. To put this into perspective, 1,024 bytes is roughly the length of a short paragraph: barely enough for a detailed client email, let alone a complex code review. Needless to say, that is not enough.

But that wasn’t the only issue. Two other internal incidents followed, one involving an employee using ChatGPT to generate meeting minutes from an internal discussion and another where an employee uploaded sensitive program code for optimization. These incidents led the company to consider banning generative AI tools in May 2023.

In response to these concerns, Samsung announced plans to develop its own AI tools for internal use, illustrating the importance of keeping sensitive data within secure boundaries.

4. The Key to Smart AI Use: Be Proactive, Not Reactive

So, how can we avoid the pitfalls of misuse? The answer is more straightforward than you might think.

For Decision Makers

If your team members are already using generative AI tools like ChatGPT to help them in their daily work tasks (and according to the Data and AI Trends Report of 2024, 84% of people believe generative AI can help organizations access insights faster, so you're not alone), it’s time to establish a clear AI usage policy.

An AI usage policy is crucial in establishing accountability and mitigating legal risks related to AI. While it doesn't guarantee success in legal disputes, it provides a clear framework for responsible AI usage and evidence of intent in case of a breach.

An effective AI usage policy should address the following key areas:

Defining AI Tools

The first step towards a successful policy is defining which AI tools are approved for use within an organization to accomplish specific work tasks. Usage permissions can be limited to particular AI tools that do not store user data within the tools’ memory.

From a user’s perspective, AI tools generally fall into two categories: consumer and enterprise tools. Enterprise-grade AI tools, such as ChatGPT Enterprise and Claude for Teams, typically enforce stricter security policies, prioritizing privacy by ensuring that prompts (user inputs) are not stored. Before adopting any AI tool, it is essential to review its terms of use to understand its data-handling practices.
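To make this concrete, here is a minimal sketch of what an approved-tools allowlist might look like. The tool names and retention flags below are illustrative assumptions; the real values must come from each vendor's terms of use:

```python
# A minimal sketch of an approved-tools allowlist. The "retains_prompts"
# flags are illustrative assumptions; confirm each vendor's actual
# data-handling behavior before relying on them.
APPROVED_AI_TOOLS = {
    "ChatGPT Enterprise": {"retains_prompts": False},
    "Claude for Teams": {"retains_prompts": False},
    "Free consumer chatbot": {"retains_prompts": True},
}

def is_tool_usable(name: str) -> bool:
    """A tool is usable only if it is listed and does not retain prompts."""
    tool = APPROVED_AI_TOOLS.get(name)
    return tool is not None and not tool["retains_prompts"]

print(is_tool_usable("ChatGPT Enterprise"))     # True
print(is_tool_usable("Free consumer chatbot"))  # False
print(is_tool_usable("Unknown tool"))           # False
```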

Defining AI Tasks

Once the approved tools are selected, defining the specific tasks that AI can perform within the organization is essential. These tasks should be aligned with business objectives while also considering ethical boundaries. AI can be used for data analysis, content creation, customer support, and process automation. Still, decision-making involving sensitive data, such as legal, financial, or medical advice within the organization, should be avoided.
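A similar sketch can express those task boundaries in code. The category names below are illustrative assumptions mirroring the paragraph above, not an official taxonomy:

```python
# A minimal sketch of task boundaries based on the policy text above.
ALLOWED_TASKS = {"data analysis", "content creation",
                 "customer support", "process automation"}
BLOCKED_TASKS = {"legal advice", "financial advice", "medical advice"}

def check_task(task: str) -> str:
    """Classify a requested AI task against the policy."""
    task = task.strip().lower()
    if task in BLOCKED_TASKS:
        return "blocked: sensitive decision-making"
    if task in ALLOWED_TASKS:
        return "allowed"
    return "needs review by a policy owner"

print(check_task("Content Creation"))  # allowed
print(check_task("legal advice"))      # blocked: sensitive decision-making
```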

Monitoring Process

The monitoring phase begins once the permitted tools and tasks are clearly defined. It is crucial to track AI usage regularly to ensure full compliance with the policy. For example, administrators can monitor which tasks AI is used for and whether any tools are being misused. Enterprise AI tools often offer admin accounts that allow administrators to monitor AI interactions, align with policies, and restrict unauthorized access. This centralized control makes tracking more efficient and seamless, ensuring that AI is used appropriately.
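If your setup lets administrators export prompt logs, even a simple scan can flag obvious policy violations. Here is a minimal sketch, assuming plain-text logs; the patterns are illustrative, and production deployments would use dedicated data-loss-prevention (DLP) tooling:

```python
# A minimal sketch of scanning exported prompt logs for sensitive content.
# The regex patterns are rough, illustrative assumptions.
import re

SENSITIVE_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "card-like number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "key-like token": re.compile(r"\bsk-[A-Za-z0-9]{16,}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

print(scan_prompt("Summarize this contract for jane.doe@example.com"))
# ['email address']
```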

Incident Response

Despite preventive measures, incidents can still occur. This makes establishing a clear protocol for responding to AI-related data exposure incidents or ethical violations essential. A solid incident response plan will help mitigate any risks associated with AI misuse. For further insights, refer to the National Institute of Standards and Technology (NIST) AI Risk Management Framework in Arabic and English.

While creating and enforcing such a policy might take some time, it’s far less costly than dealing with the aftermath of a security incident.

The goal is to be proactive, not reactive. Consider launching an internal “Honest AI” project that brings stakeholders together to discuss what sensitive data should remain protected in corporate work. Mapping and understanding this data will reveal the scenarios where AI usage at work is not appropriate, and those scenarios become your policy's starting point.

Gartner’s 2023 survey on AI in organizations found that only 53% of organizations have moved from AI experimentation to embedding it within their business operations. This modest figure shows that many organizations have not yet matured their data management and governance practices, and we invite you and your business to lead the way.

For Employees

Let’s face it—AI tools like ChatGPT can significantly enhance our efficiency and creativity. Instead of being hesitant, let’s embrace AI as a valuable part of our workflow, integrating it ethically and responsibly!

Next time AI helps you rephrase a report, draft content, or generate insights, don’t hesitate to acknowledge its role. For instance, you can say:

“AI tools for research, content structuring, and drafting assisted this document. However, all information has been carefully reviewed, edited, and verified by [Author Name/Team Name] to ensure accuracy and reliability.”

This approach encourages responsible AI usage and sets a positive example for your colleagues.

Are you using specialized AI tools to analyze data and discover valuable insights? Share them during your next meeting! Doing so fosters transparency and shows how AI can be a great collaborative tool when used responsibly.

Another useful feature of chat-based AI tools like ChatGPT is the “Temporary chat” option. When activated, it lets you opt out of having your input used as training data, giving you more control over your privacy and how your data is used.

To summarize, before using AI for any task, ask yourself these questions:

  • Does my company's AI policy cover this task? If you're unsure, clarify with your manager.
  • Am I sharing any sensitive data? If so, ensure it’s absolutely necessary and take steps to anonymize it when possible (see the sketch after this list).
  • Am I treating AI output as a starting point or the final answer? Be cautious of “hallucinations”: AI tools like ChatGPT can occasionally generate information that sounds plausible but may not be accurate. When in doubt, verify and double-check the information.
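On the anonymization point, here is a minimal sketch of redacting obvious personal data before a prompt leaves your machine. Regex redaction is a rough illustration, not full anonymization; dedicated tools such as Microsoft Presidio go much further:

```python
# A minimal sketch: replace emails and phone-like numbers with placeholders
# before sending a prompt to an AI tool. Patterns are illustrative only.
import re

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\+?\d[\d ()-]{7,}\d"), "[PHONE]"),
]

def redact(prompt: str) -> str:
    """Substitute placeholders for each matched pattern, in order."""
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(redact("Contact Jane at jane.doe@example.com or +1 555 123 4567."))
# Contact Jane at [EMAIL] or [PHONE].
```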

5. Conclusion 

As we navigate the evolving landscape of generative AI, it’s essential to approach its integration with caution and responsibility. AI tools like ChatGPT can be powerful allies in enhancing productivity and creativity, but they also come with crucial considerations—especially regarding data security and ethical use.

Remember what Benjamin Franklin said: “Two can keep a secret if one is dead.” But no one has to be dead to keep a secret—just be careful with AI!
