
OpenAI Restructures Research Team Shaping ChatGPT’s Personality


San Francisco, September 2025: OpenAI is shaking up one of its most important research groups, a move that could change the way millions of people interact with its chatbots. The company is folding its Model Behavior team, a small but influential unit that has helped define the “voice” and personality of ChatGPT, into its larger Post Training division.

The decision, shared in an internal memo by Chief Research Officer Mark Chen, shows just how central personality has become to OpenAI’s strategy. Instead of treating it as an add-on, the company is now building personality into the very foundation of its AI models.

Why Personality Matters in AI

For years, OpenAI has been refining not only the accuracy of its models but also how they feel to users. The Model Behavior team played a big role here, making sure the AI didn’t simply agree with everything a person typed — a problem known as sycophancy. Left unchecked, this tendency can be harmful, especially if a user is struggling with difficult or dangerous thoughts.

The team also helped guide how models respond to politically sensitive issues and worked on the company’s position around deeper questions, like whether AI could ever be considered “conscious.”

By moving this team into core model development, OpenAI is saying clearly: the personality of AI isn’t just important — it’s essential.

Leadership Changes and New Beginnings

The reorganization also comes with a leadership shift. Joanne Jang, who built and led the Model Behavior team, will now lead a brand-new initiative called OAI Labs. This unit will focus on creating experimental ways for people to work with AI beyond simple chat windows.

“I’m really excited to move past the idea that AI is only a chat partner or an agent,” Jang told TechCrunch. “I see it more as an instrument — something we can use to think, create, play, and connect.”


Jang hinted that OAI Labs may explore new interfaces, and while it’s too early to predict the first projects, she’s leaving the door open to collaborations. That could even include Jony Ive, the former Apple design chief who is already working with OpenAI on AI-powered hardware.

The Challenge of Warmth and Safety

This restructuring comes at a critical time. In recent months, OpenAI faced criticism after GPT-5 was rolled out with a personality some users described as “colder.” While the company noted that the update reduced sycophancy, many users missed the warmer tone of earlier models. OpenAI quickly responded by adjusting GPT-5 to strike a better balance.

But there have also been serious concerns. Earlier this year, the parents of a teenager sued OpenAI, alleging that ChatGPT (running on GPT-4o) didn’t do enough to push back against their son’s suicidal thoughts. The lawsuit underscored how much responsibility AI carries when conversations go beyond casual chat.

By blending personality research with model training, OpenAI hopes to avoid these pitfalls and design AI that feels both helpful and safe.

What This Means for the Future

Looking ahead, the restructuring points to a future where the personality of AI will be as carefully engineered as its intelligence. Instead of being treated like a finishing touch, personality will become a defining feature of how these systems work.

OAI Labs, meanwhile, could be the birthplace of the next generation of AI experiences. If OpenAI’s past work is any sign — from DALL·E 2 to GPT-4 — we can expect experiments that reshape not only how AI communicates but also how it fits into our daily lives.

And with design heavyweights like Jony Ive in the mix, OpenAI’s research may not just stay in software. It could extend into physical devices, creating new ways for people to interact with AI beyond screens.

Looking Back

The Model Behavior team leaves behind an important legacy. Since its founding, it has touched nearly every major release, including GPT-4, GPT-4o, GPT-4.5, and GPT-5. It also played a part in shaping tools like DALL·E 2, proving its versatility across different AI projects.

OpenAI has had to navigate user backlash, lawsuits, and tough ethical debates — and this restructuring suggests the company has learned that personality is not just about making AI friendly. It’s about trust, safety, and the way people form relationships with technology.

As OpenAI steps into this new chapter, one thing is clear: the future of AI won’t just be about what it knows, but how it makes us feel.