Bias-Free AI? Meet the Young Entrepreneur Taking On OpenAI and China’s DeepSeek

Cyril Gorlla’s Vision: AI That Reflects People, Not Power

Indian-origin entrepreneur Cyril Gorlla, just 23 years old, is on a mission to make AI fair, safe, and aligned with human values. As co-founder and CEO of CTGT, he believes AI is heading toward total integration into society—and we must act now to shape it responsibly.

“AI will soon guide governments, influence society, and drive business decisions,” Gorlla said in an interview. “We must embed the right values into AI from the very beginning.”


The Problem With AI Bias: More Dangerous Than It Seems

Long-Term Risks Often Ignored

Gorlla warns that bias in AI isn’t just a short-term glitch. Imagine governments using biased AI to make public policy. “The effects years down the line could unfairly hurt communities,” he explained. “We’re still in the early days of AI—just like the internet in the ’90s.”


What CTGT Is Building: Trustworthy AI That Adapts in Real Time

A Revolutionary Method to Remove AI Bias

CTGT, co-founded by Gorlla and Trevor Tuttle (CTO), recently raised $7.2 million in seed funding led by Google’s Gradient Ventures. Other backers include Y Combinator, General Catalyst, and well-known angel investors.

Their breakthrough? A mathematical method to remove censorship and bias at the core of AI models—without retraining them.


How It Works: No Retraining, Just Results

Instead of rewriting the entire model, CTGT identifies and isolates internal features that cause bias. Then, they modify these features to ensure fairer outputs. This method is:

  • 500x faster than traditional fine-tuning
  • 100% successful in removing bias and censorship in testing
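The approach described above, isolating internal features and modifying them at inference time rather than retraining, resembles published "directional ablation" techniques for steering model activations. Below is a toy NumPy sketch of that general idea, not CTGT's actual method: `ablate_direction`, `bias_dir`, and the synthetic data are all hypothetical. The core move is projecting the unwanted feature's component out of the hidden activations.

```python
import numpy as np

def ablate_direction(hidden, direction, strength=1.0):
    """Remove the component of each activation vector along `direction`.

    This edits activations at inference time; no weights are retrained.
    """
    direction = direction / np.linalg.norm(direction)
    coeffs = hidden @ direction  # per-row component along the feature
    return hidden - strength * np.outer(coeffs, direction)

rng = np.random.default_rng(0)

# Hypothetical unit vector along which the unwanted feature
# (e.g. a censorship behaviour) lies in the model's hidden space.
bias_dir = rng.normal(size=16)
bias_dir /= np.linalg.norm(bias_dir)

# Toy activations contaminated by that feature.
hidden = rng.normal(size=(4, 16)) + 3.0 * bias_dir

cleaned = ablate_direction(hidden, bias_dir)

# After ablation, the activations carry (near-)zero component along bias_dir.
print(np.allclose(cleaned @ bias_dir, 0.0))  # prints True
```

Because the edit is a single projection applied to activations, it avoids the cost of gradient-based fine-tuning entirely, which is consistent with the speed claim above, though the 500x figure is CTGT's own.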

They tested it on DeepSeek R1, a Chinese AI model known for its censorship, especially on topics like Tiananmen Square. After CTGT’s intervention, the model responded freely and fully—without needing a complete overhaul.


Battling Censorship in Global AI Models

Case Study: DeepSeek R1

When DeepSeek launched, it drew criticism for refusing to answer politically sensitive questions. CTGT took this as a challenge.

“We analyzed which parts of the model shut down during sensitive queries,” Gorlla said. “The model knew the answers but was suppressing them.” By reducing the influence of those censorship features, the model began answering openly, with no retraining required.
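Finding "which parts of the model shut down" is often done in the research literature by contrasting hidden activations on sensitive versus neutral prompts and taking the difference of the group means. A minimal sketch of that difference-of-means idea follows; the synthetic activations and the specific recipe are illustrative assumptions, not CTGT's disclosed method.

```python
import numpy as np

def feature_direction(group_a_acts, group_b_acts):
    """Unit vector along which two prompt groups' mean activations differ.

    group_*_acts: (num_prompts, hidden_dim) hidden states from one layer.
    """
    diff = group_a_acts.mean(axis=0) - group_b_acts.mean(axis=0)
    return diff / np.linalg.norm(diff)

rng = np.random.default_rng(1)

# Plant a known feature direction so we can check recovery.
true_dir = rng.normal(size=32)
true_dir /= np.linalg.norm(true_dir)

# Synthetic activations: "sensitive" prompts carry an extra
# component along true_dir, "neutral" prompts do not.
neutral = rng.normal(size=(50, 32))
sensitive = rng.normal(size=(50, 32)) + 3.0 * true_dir

est = feature_direction(sensitive, neutral)

# Cosine similarity between the estimate and the planted direction;
# it should be close to 1 when the planted signal dominates the noise.
print(float(est @ true_dir))
```

Once such a direction is estimated, it can be damped or projected out of the activations at inference time, which matches the "no retraining" framing in the quote above.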

Before CTGT’s fix, DeepSeek answered only 32% of sensitive prompts. After? Almost 100%.


Hallucination in AI: CTGT Has a Fix for That Too

Making AI Stop “Lying”

CTGT’s method also addresses AI hallucinations—when models generate factually wrong or absurd results (like suggesting glue as a pizza topping).

“Most companies rely on prompt engineering to stop hallucinations,” said Gorlla. “That limits performance. We instead identify the features causing hallucinations and suppress them mathematically. The result: smarter AI without the nonsense.”


Real-World Impact: From Cybersecurity to Healthcare

Personalised AI That Understands Your Values

CTGT works with clients in cybersecurity and healthcare, helping them train AI that’s not just smart—but human-aligned. For example:

  • A cybersecurity firm used CTGT to embed internal documents into its AI.
  • A healthcare provider used the platform to improve doctor-patient communication.

“These are not benchmark scores,” said Gorlla. “They’re deeply human interactions, and our AI respects that.”


AI vs Humans: Replacement or Amplification?

Not All Jobs Are at Risk

When asked if AI will take over jobs, Gorlla had a balanced view.

“AI will disrupt fields like copywriting or marketing. But in law, medicine, and other nuanced areas, humans will stick around longer,” he said. “The real question is—will you use AI to 10x your output or be left behind?”


The Future of AI: Creative Freedom and Ethical Power

AI as a Creative Equalizer

On AI-generated art and copyright debates, Gorlla is optimistic.

“AI democratises creativity. You don’t need to be an artist anymore to express big ideas,” he said. “It’s like the car replacing horse-drawn carriages—controversial at first, but inevitable.”


What Scares Him Most: The Race to Scale

Big Tech’s Obsession With Size Is Dangerous

Gorlla’s biggest fear? That companies keep making bigger black boxes without understanding what’s inside.

“We don’t need to win the AI race by brute force,” he said. “We win by building transparent, aligned, and understandable models.”

He recently spoke about this to the White House and Congress, urging the U.S. to focus on principled, democratic AI rather than racing China on model size.


AI That Reflects the User, Not the Corporation

Gorlla’s dream is clear: AI that works for people, not power.

“We’re building AI that’s safe, personalised, and reflects individual values,” he said. “That’s the only way forward.”