Who counts as an inventor?

It used to be a simple question. An inventor was a human. Someone with an idea. Someone who created something new.

But now we have machines that can do just that. Artificial intelligence models can design drugs, write code, even draft blueprints for engines and devices. And they’re getting better fast.

So a new debate has started. If an AI builds something novel, can it be the inventor?

Around the world, patent offices are answering that question differently. Some say no. Some say maybe. A few are open to change. In this article, we’ll explore how different countries are handling this issue—and what it means for the future of innovation.

Why This Debate Matters

Not Just Theory Anymore

This isn’t a hypothetical discussion. AI is already helping create new inventions in real life. In pharmaceuticals, an AI model can discover a new molecule. In engineering, it can optimize a design in ways no human would think of.

So when a machine does something original—something that could be patented—what happens next?

That’s the hard question countries are now facing.

Who Gets Credit?

Patents exist to reward invention. The person who invents something gets credit and control.

But what if no person did the thinking? What if a neural network processed data, ran simulations, and designed the output?

If a human didn’t invent it—can they still claim the rights?

And if the AI did, can we even grant a patent in its name?

A Legal and Economic Dilemma

This issue isn’t just about fairness. It affects business decisions, funding, licensing, and competition.

If countries don’t agree on how to handle AI-generated inventions, the global patent system could split. That means some regions could offer strong protection—others none.

And that affects where innovation happens next.

Let’s look at how major jurisdictions are responding to this growing challenge.

United States: Strictly Human… For Now

The USPTO’s Stance

The United States Patent and Trademark Office (USPTO) has been firm. As of now, it does not allow an AI system to be listed as an inventor on a patent application.

In its view, only a natural person—a human—can be considered an inventor under U.S. law.

This interpretation comes from how the law was written. The term “individual” in the patent statute refers to humans, not machines.

The DABUS Case

This debate gained attention because of the DABUS case. DABUS is an AI system created by Dr. Stephen Thaler. It generated designs for two products, a fractal-shaped food container and a flashing emergency beacon, and patent applications were filed around the world naming DABUS as the inventor.

The USPTO rejected those applications, and the U.S. courts agreed. In Thaler v. Vidal (2022), the Federal Circuit held that an inventor must be a natural person.

The Supreme Court declined to take the case, so that position stands.

Still Open to Change?

Even though the rules are strict, the USPTO has sought public comment on AI and inventorship. It wants to understand how stakeholders view the issue.

Some groups argue the law should change. They say if we deny patents for AI-generated inventions, companies will hide them as trade secrets instead.

Others worry that giving machines inventor status creates more problems than it solves.

But for now, in the U.S., the rule is clear: no AI inventor names allowed.

United Kingdom: Human Interpretation Wins

UKIPO Aligns With the U.S.

The UK Intellectual Property Office (UKIPO) has taken a similar stance to the USPTO. It ruled that only natural persons can be inventors under UK patent law.

When the DABUS applications reached the UKIPO, they were rejected on this ground. The UK courts upheld that decision at every level, up to a unanimous Supreme Court ruling in 2023.

They noted that the Patents Act uses the word “person” deliberately—and that excludes machines.

Legal Logic vs. Practical Impact

The reasoning in the UK has been cautious and legalistic. The courts and the UKIPO have acknowledged that the policy question deserves further study, but until Parliament changes the law, their hands are tied.

From a legal standpoint, this approach keeps the system stable. But it may discourage innovators who use AI in deep, meaningful ways.

The UK is now consulting with stakeholders to figure out if a policy shift is needed.

Until then, human inventors stay central.

Europe: Case-by-Case Caution

The EPO’s Position

The European Patent Office (EPO) examines patent applications for its contracting states, which include the EU members along with several other European countries (the EPO itself is not an EU body). Like the USPTO and UKIPO, it has also rejected AI inventorship claims.

In DABUS filings, the EPO ruled that inventors must be human. It cited the European Patent Convention, which refers to “persons” as inventors.

Because of this, even if the invention was made by AI, the inventor on record must be a person.

A Narrow Path

But there’s a small opening here.

The EPO does not ask how the invention was created—just who is named. That means you can use AI to assist with an invention, as long as a human takes responsibility.

This has become the workaround: name a person who oversaw or prompted the AI process.

For now, that satisfies the system. But it raises a deeper issue—are we rewarding the person or the process?

Possible Reforms?

There’s quiet talk within the EU about reforming the rules. But no immediate changes are planned.

The EPO wants to avoid a system where machines become patent holders. At the same time, it wants to keep AI-inventing companies engaged in the European innovation landscape.

So expect slow, cautious shifts—not sweeping change.

Australia: A Temporary Surprise

A Court Sided With the Machine

In a surprising twist, Australia briefly broke ranks with other countries. In 2021, an Australian judge ruled that DABUS could be named as an inventor under Australian patent law.

The ruling stated that the law did not specifically exclude non-human inventors. Therefore, AI could be named.

This decision made headlines around the world.

It gave hope to those advocating for AI inventorship.

The Victory Didn’t Last

But the win was short-lived. In 2022, the Full Federal Court overturned the decision.

It ruled that the word “inventor” must refer to a human. This brought Australia back in line with most other major IP offices.

Even so, this case showed that different interpretations are possible—even with similar laws.

It also highlighted how quickly legal views on AI are evolving.

China: Open to AI, But Quiet About Inventorship

Strong Push for AI, But Careful Language

China has been aggressively supporting AI development. The government considers AI a strategic priority, and its patent filings in AI-related technologies are among the highest in the world.

That makes China an interesting test case for AI inventorship.

However, despite its enthusiasm for AI innovation, China has not formally allowed AI to be listed as an inventor. Chinese patent law still assumes that an inventor is a human.

Patent applications must include personal details—name, citizenship, and identification—for the listed inventor. An AI can’t provide these.

Focus on Practical Use Over Philosophy

Rather than debating whether machines can invent, China’s policy focuses on encouraging companies to use AI tools in R&D. The priority is real-world application, not redefining inventorship.

Chinese firms are quietly using AI in invention pipelines. But they still name humans on filings, usually the engineer or researcher who directed the AI.

In short, the country’s approach is practical: innovate with AI, but don’t disrupt the legal structure—yet.

South Africa: The First to Accept AI as Inventor

A Break from the Pack

In a landmark moment, South Africa became the first country to grant a patent that named an AI—DABUS—as the inventor.

This decision came in 2021, and it immediately stood out. Unlike most countries, South Africa’s patent office does not conduct a detailed examination of applications. It follows a formal check, not a substantive review.

That procedural difference made the DABUS grant possible.

It didn’t mean South Africa officially recognized AI inventorship in law. But it did show how gaps in formal review systems can lead to new interpretations.

What This Actually Means

Although the patent was granted, the broader legal status is unclear. South Africa hasn’t updated its laws or issued formal guidance on AI inventorship.

So while the case is symbolically important, it doesn’t set a binding legal precedent. But it still matters.

It proved that someone, somewhere, was willing to issue a patent listing AI as the creator. That alone shifted the conversation.

WIPO and International Discussion

The Need for a Global Standard

The World Intellectual Property Organization (WIPO) plays a key role in shaping how countries think about IP.

It doesn’t make law, but it guides it. And in recent years, WIPO has convened public consultations and published reports on AI’s impact on IP systems.

The agency acknowledges that AI-generated content and inventions are real. It also recognizes that different countries are responding in different ways—and that this creates confusion.

If the world can’t agree on who can be an inventor, the value of international patents may weaken.

WIPO wants to avoid that. Its focus is now on gathering input, sharing case studies, and encouraging collaboration.

A Balancing Act

WIPO’s stance is neutral. It hasn’t said AI should be named an inventor. Instead, it asks the key question: What is the purpose of the patent system?

If the goal is to reward creation, does it matter who or what created it?

If the system encourages innovation, does denying protection for AI-generated work discourage progress?

WIPO is urging policymakers to think ahead. The goal is to update laws in a way that supports invention—without causing legal chaos.

What Happens Next? Shifts, Pressure, and Possibilities

Governments Under Pressure

The more AI is used in real-world innovation, the more pressure patent offices feel.

If major inventions come from AI systems and can’t be protected, companies may turn to secrecy or look to jurisdictions that offer more flexibility.

This creates competition between countries—not just in tech, but in legal policy.

Some governments may update laws to attract AI innovation. Others may wait, fearing misuse or loss of control.

That uneven response is the heart of today’s tension.

Possible Policy Paths

There are a few ways the world might move forward.

One is to require that a human always be named, even if AI did most of the work. This maintains legal stability while allowing AI-assisted invention.

Another option is creating a new category—perhaps “AI-generated innovation”—that grants protection but doesn’t treat the AI as a legal person.

A third, more radical path is to accept AI as an inventor, but assign ownership to a supervising human or entity.

Each of these paths has trade-offs. But all are being discussed in government policy circles right now.

What Should Innovators Do Today?

If you’re using AI to generate inventions, don’t wait for the laws to catch up.

Document how the AI is used. Make sure a human is clearly guiding, prompting, or reviewing the process.

Assign inventorship based on that human role—not just to satisfy law, but to keep your IP strategy clean.

It’s also smart to work with counsel early. If your business depends on patent rights, you’ll want to avoid gray areas that could come back later in litigation or licensing deals.

Patents are still valuable—but they’re becoming harder to navigate when AI is in the picture.

The Heart of the Debate: What Is Invention?

Beyond Machines and Laws

At its core, this debate isn’t about software or statute books. It’s about the nature of invention itself.

Is invention about intent? Insight? Effort? If an AI system generates a novel design in seconds, does that make it less meaningful than one a human spent months on?

Or is the output all that matters—regardless of how it came to be?

These questions don’t have easy answers. But they’re shaping how the future of IP will work.

And they will decide who controls the next generation of innovation.

Key Takeaways for Inventors and Businesses

No Global Consensus—Yet

As of today, there’s no shared international policy on AI as an inventor. Most countries agree that AI cannot be named as an inventor on a patent. But the reasons—and flexibility—vary widely.

The U.S., the UK, and the EPO are firm in requiring humans. South Africa granted a patent to an AI, but without substantive examination. China remains open in tone but silent in law. WIPO is facilitating dialogue but hasn’t committed to a position.

That means where you file—and how you frame your application—matters more than ever.

It’s Not Just About Patents

While much of the AI inventor debate focuses on patents, the ripple effect touches every corner of IP law.

What about copyright for AI-generated music or art? Who owns datasets AI is trained on? Who’s liable if AI creates something harmful?

IP frameworks were not built with autonomous tools in mind. So inventors and lawyers now need to operate in legal gray zones with precision.

This isn’t just a tech issue. It affects licensing, valuation, partnerships, and long-term strategy.

A Human Face Is Still Required

If you’re building with AI, you need a person tied to the invention. Courts and IP offices expect to see a named individual.

That doesn’t mean the human did all the work. But they must be able to demonstrate some level of conceptual contribution, oversight, or creative input.

Until laws change, inventorship will remain a human role—one that carries legal and financial responsibility.

This must be factored into team structure, contributor agreements, and even funding negotiations.

Strategic Moves for Companies Using AI to Innovate

Assign IP Ownership Early

If your business is using AI to invent, your ownership structure needs to be airtight.

Make sure all contributors—developers, data scientists, prompt engineers—have signed agreements that assign any resulting IP to your company.

Even if AI plays a heavy role, you’ll need to document how and when humans influenced the outcome. That supports the inventorship claims and reduces disputes later.

This is especially important in cross-border or open-source teams where contribution is informal.

Clear IP assignment today prevents painful disputes tomorrow.

Document the Process, Not Just the Output

When filing patents with AI-generated components, detail how the result was achieved. Don’t just describe the invention—explain how human input shaped it.

Was the AI trained on specific datasets? Were prompts iterated by a human operator? Was there filtering, analysis, or adjustment of results before claiming novelty?

These details strengthen the patent’s legitimacy. They show that even if the machine contributed, the inventive act was still guided by human intention.

And they protect you during enforcement or litigation.
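What might that documentation look like in practice? One lightweight approach is to keep a structured log of every AI-assisted step alongside your R&D records. The Python sketch below is only an illustration of that idea; the record fields, the log_contribution helper, and the file name are assumptions made for this example, not any patent office's required format, and your counsel may want different or additional fields.

```python
# Minimal sketch of an AI-contribution log for an R&D pipeline.
# All names here (ContributionRecord, log_contribution, the JSONL file) are
# illustrative assumptions, not a legal standard or a patent-office format.
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone


@dataclass
class ContributionRecord:
    step: str                  # e.g. "candidate screening", "prompt iteration 3"
    tool: str                  # AI system or model used for this step
    human_operator: str        # person who directed, prompted, or reviewed the step
    human_contribution: str    # what the human actually decided, selected, or changed
    datasets: list[str] = field(default_factory=list)  # data the AI drew on
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def log_contribution(record: ContributionRecord,
                     path: str = "invention_log.jsonl") -> None:
    """Append one record to a JSON Lines audit log, one entry per line."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")


# Example: a researcher iterates prompts and filters the AI's candidate designs.
log_contribution(ContributionRecord(
    step="candidate filtering",
    tool="in-house generative design model",
    human_operator="J. Rivera",
    human_contribution="Selected 3 of 40 generated designs; tightened heat-tolerance constraints.",
    datasets=["internal materials dataset v2"],
))
```

A log like this does not settle inventorship on its own, but it gives you and your counsel concrete evidence that a named human guided, filtered, and shaped the AI's output.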

Consider a Layered IP Strategy

Relying on patents alone may not be enough. In many cases, the AI engine or method itself may be best protected as a trade secret.

The training data, prompt architecture, or post-processing logic might hold more long-term value than the specific output.

Think about protecting:

  1. The process (via patents).
  2. The model or algorithm (via trade secret or copyright).
  3. The product’s name and identity (via trademarks).

This kind of layered strategy ensures protection even if some parts of your invention aren’t patent-eligible yet.

It’s about flexibility—not rigidity.

Don’t Wait for Law to Catch Up

AI is moving fast. Courts and lawmakers are not.

That means most companies have to navigate uncertainty. Don’t assume regulations will magically adapt in your favor.

Instead, work within existing frameworks—while building a record of responsible, documented, transparent innovation.

If and when new laws emerge, the companies with a history of clear practices will adapt quickest. Those without structure will struggle.

Treat IP as a living part of your AI strategy—not an afterthought.

What Should Governments and IP Offices Do Next?

Rewrite Definitions, Not Just Rules

The biggest hurdle in the AI inventor debate is outdated definitions. Most patent laws use words like “individual” or “person” when referring to inventors.

Updating these words—without unraveling the rest of the legal system—is tricky.

Governments should begin by revisiting definitions of inventorship, ownership, and authorship in the context of AI.

They must ask: Is it the act of conception that matters? The act of execution? Or both?

Clear answers will guide better rules.

Offer a New Inventorship Model

Instead of forcing AI into the traditional box, regulators could create a new model.

For example, patents could recognize “AI-assisted invention” as a category. A human could still be the applicant and owner, but the role of the AI would be declared and documented.

This allows transparency without legal confusion. It reflects reality without breaking the system.

Such a model would also help track how AI contributes to innovation across sectors and time.

Provide Safe Harbor for Honest Disclosure

One concern inventors have is this: If I admit the AI did most of the work, will my patent be rejected?

This discourages honest reporting. It also hides how widespread AI use has become.

IP offices can fix this by creating safe harbors. Let applicants disclose AI involvement—without fear of rejection—as long as a human takes legal responsibility.

This builds trust and opens the door for gradual reform.

The Big Picture: What Happens If We Get This Wrong?

Innovation Gets Hidden

If AI-generated inventions aren’t eligible for protection, some companies may stop filing patents altogether. They’ll choose trade secrets instead.

That reduces public knowledge. It slows collective progress. It creates a fragmented innovation economy where only insiders benefit.

The patent system exists to prevent this. But if the system becomes too narrow, it defeats its own purpose.

Legal Confusion Slows Down Startups

Small companies and researchers need clarity. They can’t afford complex legal battles or vague rules.

If inventorship law becomes unpredictable, startups might avoid AI altogether—or build in risky ways without proper safeguards.

That’s bad for innovation. And bad for society.

Clear, simple, fair rules help more people build responsibly.

Global Gaps Create Uneven Playing Fields

When countries disagree, inventors choose where to file based on strategy—not innovation quality.

This favors regions with the most generous rules—not the best ideas.

In the long term, it creates trade disputes, weak patent enforcement, and gaming of the system.

International cooperation is the only way to keep the system fair.

Final Thoughts: A Human Question, Not Just a Machine One

This debate is about AI. But it’s really about us.

How do we define invention in the 21st century? What role do we give machines? How do we keep rewarding creativity while adapting to new tools?

These aren’t just legal questions. They’re cultural, philosophical, and economic ones.

And they need thoughtful, flexible answers.

If we get this right, we unlock a new era of innovation—where human imagination and machine intelligence work together.

If we get it wrong, we risk building walls around progress instead of bridges to it.

The law doesn’t need to fear the machine. It just needs to understand it.