We are excited to acquire TBPN, a team known for editorial talent, audience insight, and strong culture.  

TBPN is a place for real conversations about AI and its creators. Many of you already use the show to stay up to date.  

The traditional communications playbook doesn’t suit OpenAI. Leading technological change means fostering open conversations about AI’s impact, especially among builders and users.  

TBPN has already built this kind of space. Rather than trying to replicate it, it made sense to bring them in, support their work, and help them grow while keeping what makes them special. Going forward, TBPN will serve as the central hub for our dialogue with the AI community, fostering open conversation and sharing perspectives about the future of technology. Editorial independence is important. TBPN will continue to run its own programming, choose its guests, and make its own editorial decisions. This is essential to their credibility, and we are committed to protecting it in this agreement.  

We look forward to leveraging TBPN’s communication and marketing expertise to demonstrate how AI is changing daily life in new ways.  

TBPN will join our strategy group and report to Chris Lehane. Welcome, Jordi, John, Dylan, and team.  

A statement from TBPN:  

  • Over the past year, we’ve had a front-row seat not just to OpenAI but to the entire ecosystem, covering daily news announcements and launches in real time. While we’ve been critical of the industry at times, after getting to know Sam and the OpenAI team, what stood out most was their responsiveness to feedback and their dedication to getting this right. Moving from commentary to real impact on how this technology is distributed and understood globally is incredibly important to us – Jordi Hays, co-founder and co-host of TBPN.  

About TBPN 

Technology, business, programming, network (TBPN) is a daily live tech show and one of the fastest-growing media companies. It is hosted by entrepreneurs Jordi Hays and John Coogan on weekdays from 11 a.m. to 2 p.m. PT. The New York Times recently described TBPN as Silicon Valley’s newest obsession, led by Coogan, Hays, and President Dylan Abruscato. TBPN caught fire across the technology ecosystem and can be found on X, YouTube, Spotify, Apple Podcasts, LinkedIn, Substack, and Instagram. 

Source: OpenAI acquires TBPN

OpenAI has achieved a $122 billion valuation, demonstrating that investors believe in the company’s leadership in artificial intelligence. The increased valuation also underscores OpenAI’s growing influence across the sectors it serves and its commitment to advancing the development of AI systems designed to run applications across a wide range of industries, including productivity and scientific research. The growing demand for generative AI will enable OpenAI to establish itself as the leading organisation advancing AI through its infrastructure development, partnership creation, and product innovation.  

Scaling AI for the Next Generation  

With the rapid emergence of AI technologies, there is demand for scalable infrastructure to support new levels of complexity in application development across an even broader range of areas than previously possible. As OpenAI continues through its current phase of growth, the company is looking to build and deploy more sophisticated models that offer much higher-quality reasoning, the ability to work with multiple modalities (text, video, and images), and instantaneous user interactions. To achieve this expanded capability, OpenAI is focused not only on the performance of each model but also on the overall trustworthiness, security, and dependability of the AI systems it delivers to clients. 

With the increasing use of AI in everyday processes, the need for scalable, high-availability infrastructure is becoming imperative. The OpenAI approach acknowledges that the future of AI technology depends on both developing new technologies and implementing them at an accelerated pace and scale.  

Investment in Infrastructure and Compute  

OpenAI heavily invests in computing infrastructure, as it is critical for training and developing large AI models. The reason it requires so much investment is that advanced AI systems of the future will require significant processing power, data, and energy. As a result, computing infrastructure is a competitive advantage or differentiator.  

Therefore, increasing their compute capacity will enable OpenAI to accelerate model training cycles, improve operational efficiency, and handle a rapidly expanding number of end users. OpenAI will partner with cloud service providers and hardware vendors in order to meet these needs. These partnerships and relationships will also allow OpenAI to build a scalable business while maintaining performance levels and reliability.  

This emphasis on infrastructure underscores the growing importance of computing resources in the broader AI development landscape.  

Expanding Product Ecosystem  

OpenAI is expanding its product ecosystem by integrating AI capabilities into a wide range of applications and services. The components of the ecosystem are designed for individual consumers (e.g., chatbots) and provide developer tools and enterprise solutions, thereby creating a single, unified platform on which consumers and businesses can rely.  

Many people are looking for ways to automate their daily lives, create new content, and support decision-making. By providing a scalable, versatile platform for integrating AI into organisations, OpenAI will help improve organisational effectiveness and drive creative solutions across industries. 

With its unified product ecosystem, OpenAI can offer a range of value-based solutions, thereby establishing itself as a major player in this space.  

Enterprise Adoption and Industry Impact  

Organisations are quickly adopting AI across their businesses to leverage sophisticated models for productivity, customer engagement, and data analysis. OpenAI’s tools and services are being embraced across many industries, including finance, health care, education, and software development, demonstrating their versatility and impact.  

As OpenAI provides businesses with enterprise-grade solutions for integrating AI into their existing processes, those solutions contribute to smarter decisions, efficiency through automation, and better outcomes across business operations. Businesses increasingly recognise AI’s potential to transform how they operate and to provide a competitive advantage.  

The rapid proliferation of AI across industries increases the need for scalable, reliable, and secure AI systems.  

Competition in the AI Landscape  

The increasing competitiveness of the AI industry has led technology companies to invest significant sums in research facilities and product development. OpenAI has been valued highly for its strong market position; however, to maintain its lead in this sector, it must continually innovate and efficiently execute its strategies.  

The AI market currently has an abundance of competitors, which drive competition through the introduction of many new products and services. In addition to making ongoing investments in new technology through research, most of OpenAI’s competitors are quickly updating their existing products with advances in generative AI models, multimodal technology, and enterprise solutions. Long-term success will depend on differentiation through performance, user experience, and the depth of ecosystem integration within the overall marketplace. 
 
OpenAI will need to strike the right balance between innovating on new products and applying current technology in practical ways in order to keep competing. 

Challenges in AI Scaling  

AI systems face significant challenges when scaled. The need for increased computing power, along with the costs of energy and data processing, creates unique challenges in maintaining efficient, low-cost models as they grow larger and more complex.  

Safety issues such as ethics and governance are also important to consider in the face of growing AI capabilities. As AI grows more powerful, addressing bias, misinformation, and proper usage will become increasingly important. OpenAI has thus highlighted the need to build safeguards and governance mechanisms for the safe deployment of AI technologies.  

The industry faces very significant challenges in balancing rapid technological innovation with a responsible approach to development.  

Partnerships and Collaboration  

Collaboration is an important part of OpenAI’s strategy because it enables it to leverage diverse skills, expertise, and resources from the broader technology community. OpenAI collaborates with public cloud providers, large companies, and research and academic institutions to create and deploy AI systems at massive scale.   

OpenAI collaborates with many different types of partners to effectively deploy AI across a broader range of applications and use cases, ultimately delivering real value through technological innovation. By collaborating with others, OpenAI can innovate faster and have a greater impact across multiple sectors and industries. 

Future Developments in AI  

Over the last several years, OpenAI has made significant advances in building and enhancing its models by targeting specific areas for improvement, such as reasoning, language capabilities, and real-time interaction, with applications ranging from complex virtual assistants to tools that help researchers conduct scientific work more easily.  

In addition to building larger models, advancements during the next wave of AI scaling will include developing more efficient architectures and better integrating them with hardware- and software-based systems. A strong commitment to ongoing research and development will enable OpenAI to uncover new potential and sustain growth in the rapidly changing AI environment.  

Looking Ahead: The Next Phase of AI Growth  

The rise in OpenAI’s valuation and its ongoing investment in scalable technology demonstrate that AI can be a disruptive force in shaping our society. As OpenAI builds upon its capabilities and infrastructure, it is shaping the future of artificial intelligence by influencing how technology is developed and delivered worldwide.  

The next stage of growth will require OpenAI to provide users and organisations with powerful, reliable, and responsible AI systems that satisfy their needs. 

Source: OpenAI raises $122 billion to accelerate the next phase of AI

I am excited to announce our acquisition of TBPN. Their expertise in editorial content and deep audience connection make them invaluable partners as we pursue our mission.  

Many of you already use TBPN to stay up to date with the latest news, which is why it stands out as a place where real conversations about AI and its creators happen every day.  

Reflecting on our communication at OpenAI, I see that the usual approach doesn’t work for us. Unlike typical companies, we’re leading major technological development, with a mission to ensure artificial general intelligence benefits everyone. This means we must create space for honest, useful conversations about AI’s impact, especially among builders and users.  

TBPN has already built this kind of space. Rather than trying to do it ourselves, we thought it might make sense to bring them in, support their work, and help them grow while keeping what makes them special. Editorial independence is very important. TBPN will continue to run their own show, pick their guests, and make their own editorial choices. This is essential to their credibility, and we are committed to protecting it under the terms of our agreement.  

In addition, I’m excited to integrate their strong communications and marketing skills with ours. They have helped many brands succeed online, and their instinct for industry trends impresses me. Together, we can find new ways to help people better understand how AI affects everyday life.  

TBPN will join our strategy team supporting Chris Lehane. Welcome, Jordi, John, Dylan, and the whole team.  

This acquisition represents OpenAI’s first move into media ownership. TBPN, the leading industry talk show, will now report to Chris Lehane, OpenAI’s chief political operative.  

TBPN, hosted by John Coogan and Jordi Hays, airs three hours daily on YouTube and X, focusing on tech, business, AI, and defense.  

The show has a loyal Silicon Valley audience and features candid conversations with leaders like Mark Zuckerberg and Satya Nadella.  

TBPN will maintain its brand and editorial control, continuing as a leading independent media voice. With OpenAI’s resources and support, TBPN plans to expand its audience and enhance coverage, while remaining responsible for its daily operations and content decisions. The show is already a major success, with expected earnings of over $30,000,000 this year, according to the Wall Street Journal.  

OpenAI has its own podcast where company members have long conversations about building technology. OpenAI plans to use the founders’ amazing comms and marketing instincts beyond just the show, according to Fidji Simo, OpenAI’s Head of AGI deployment. TBPN will help bring AI to the world and help people understand the full impact of this technology on our daily lives.  

Simo added that TBPN’s skills are important for a unique company like OpenAI, where the standard communications playbook just doesn’t apply.  

She said TBPN will maintain its editorial independence and continue to run its own programming, choose its guests, and make its own editorial decisions.  

Still, some people may have concerns about the deal. OpenAI, a leading AI lab on the verge of an IPO, is acquiring a popular talk show that often covers the company and its rivals. After the deal, TBPN will be part of OpenAI’s strategy team and report to Chris Lehane, who created the phrase “vast right‑wing conspiracy” to deflect press scrutiny of the Clinton White House.  

Lehane, a political strategist, has a notable background in politics and in advocacy for the crypto industry, bringing additional experience to the team.  

OpenAI CEO Sam Altman shared on social media that TBPN is his favorite tech show, expressing confidence that the acquisition will not affect TBPN’s commentary or its criticism of the company.  

From TBPN’s perspective, the acquisition is a chance to move beyond mere commentary.  

“While we’ve been critical at times, after meeting Sam and the OpenAI team, what stood out was their willingness to listen and their focus on getting this right,” Hays said, adding that the opportunity goes beyond offering commentary to shaping how the technology is distributed and understood globally. 

Sources: OpenAI acquires TBPN; OpenAI acquires TBPN, the buzzy founder-led business talk show 

OpenAI is piloting a memory upgrade for ChatGPT that enables it to recall information, user preferences, and workflows across conversations. This means you won’t have to repeat context every time you start a new chat, making the chatbot more personal and reliable as a long-term assistant.  

Key Aspects Of The Memory Upgrade 

  • Persistent context: ChatGPT remembers details, project settings, or coding styles from previous chats, even those from weeks or months ago.  
  • Workflow recognition: ChatGPT can remember how you approach tasks, such as your coding preferences, writing style, and specific business processes, helping it resume from where you last left off.  
  • Two-tier system: memory works in two ways, through saved memories (things you tell it to remember) and chat history (context it learns from conversations).  
  • User control: you can view and delete memories or turn off memory at any time. The temporary chat option avoids using or creating memories.  
  • Rollout: the feature is available first to some ChatGPT Free and Plus users. OpenAI plans to add it for Enterprise, Teams, and Education users later.  

This update aims to make ChatGPT a more personal partner and save you from repeating setup steps for complex tasks.  
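The two-tier design above can be illustrated with a small sketch. This is an assumed design for explanation only, not OpenAI's implementation; the class and method names are hypothetical:

```python
# Hypothetical sketch of a two-tier memory store: explicit "saved
# memories" the user dictates, plus facts inferred from chat history.
# Not OpenAI's actual implementation.

class TwoTierMemory:
    def __init__(self) -> None:
        self.saved: list[str] = []     # tier 1: user-dictated, durable
        self.inferred: list[str] = []  # tier 2: gleaned from chat history

    def remember(self, fact: str) -> None:
        """Explicit 'please remember this' request."""
        self.saved.append(fact)

    def observe(self, fact: str) -> None:
        """Context picked up passively from conversation."""
        self.inferred.append(fact)

    def forget_all(self) -> None:
        """User control: wipe everything, as in a temporary chat."""
        self.saved.clear()
        self.inferred.clear()

    def context(self) -> list[str]:
        """Saved memories take priority when assembling prompt context."""
        return self.saved + self.inferred
```

The split matters for user control: tier 1 is only ever written on request, while tier 2 can be cleared or disabled wholesale without touching what the user explicitly saved.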

Last week, OpenAI released a major update: an improved memory feature for ChatGPT. After years of helping businesses use AI, I see this as more than a minor upgrade. It signals a real shift in how we’ll work with AI assistants.  

What Is OpenAI’s New Memory Feature? 

ChatGPT can now remember and refer to everything you’ve talked about across past conversations, maintaining a lasting, complete memory without needing explicit instructions.  

I’ve been testing this feature a lot since it launched on April 10th, and the change is clear right away. For example, when I asked about a marketing campaign I had mentioned three chats ago, ChatGPT brought up the details without any extra reminders. This new approach to memory is a major step toward more natural conversations with AI.  

Now the system uses two types of memory: saved memories (what you ask ChatGPT to remember) and chat history (details it picks up from your past chats). Together, these help ChatGPT better understand your needs, making conversations easier and more useful.  

Why This Matters for Your Business 

This update solves a key business pain point: repeating project context. Previously, each chat required restating details. Now, that repetitive setup is gone.  

I recently helped a marketing team use ChatGPT to generate campaign ideas across several sessions. Previously they spent the first 10 minutes of each session restating their brand voice, target audience, and goals; now that repetitive setup is gone. We had tried ChatGPT projects before, but we wanted our chats to connect, not just rely on reference documents. 

This change lets teams use AI as a real partner. Remembered context over weeks turns the AI into more than just a tool, creating new business possibilities.  

How It Differs From Previous Memory Capabilities  

The old ChatGPT memory only worked if you explicitly told it to remember something during a conversation, and even then, recall was unreliable.  

I remember the frustration of writing long prompts packed with background just to pick up a project from the day before. The new system removes that hassle completely and determines what’s worth remembering based on several factors:  

  • Semantic relevance to your present query  
  • Recency of the information  
  • Frequency and importance of details in past conversations  
  • Your conversational intent  

The system saves and retrieves the most useful past details based on your current needs.  
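A retrieval heuristic along these lines could weigh the factors listed above. The weighting below is a guess for illustration, not the actual scoring OpenAI uses; all names are hypothetical:

```python
# Hedged sketch: rank stored memories by relevance, recency, and
# frequency. The blend and weights are illustrative assumptions.
from dataclasses import dataclass
import math
import time

@dataclass
class Memory:
    text: str
    last_used: float   # unix timestamp of last retrieval
    use_count: int     # how often this detail has come up
    relevance: float   # 0..1 similarity to the current query

def score(m: Memory, now: float, half_life_days: float = 30.0) -> float:
    """Blend the three signals into one retrieval score."""
    age_days = (now - m.last_used) / 86_400
    recency = 0.5 ** (age_days / half_life_days)  # exponential decay
    frequency = math.log1p(m.use_count)           # diminishing returns
    return m.relevance * (1.0 + 0.5 * recency + 0.25 * frequency)

def top_memories(memories: list[Memory], k: int = 3) -> list[Memory]:
    """Return the k most useful memories for the current request."""
    now = time.time()
    return sorted(memories, key=lambda m: score(m, now), reverse=True)[:k]
```

Multiplying by relevance (rather than adding it) reflects the intuition in the text: a stale or rarely used detail can still surface if it is highly relevant to what you are asking right now.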

How To Maximize The New Memory Feature 

Here are several strategies I have found especially effective for making the most of this new capability:  

Carefully choose what to ask ChatGPT to remember; explicitly highlighting key details helps the AI focus on what matters most.  

Organize your chats by project or topic. I keep my content marketing discussions in one thread and product development in another. This helps ChatGPT better understand each area.  

Occasionally, check what ChatGPT remembers by asking, “What do you remember about my [project/preferences/company]?” This ensures it tracks the most important details.  

You remain in control: view and delete memories, turn off memory for private chats, or use temporary chat for privacy. This balance of utility and privacy matters for business users.  

Three Powerful Business Use Cases 

  1. Continuous Knowledge Management 

A finance team can use the memory feature to help ChatGPT understand complex approval steps and compliance rules. Instead of updating documents, they can build knowledge through ongoing conversations with ChatGPT.  

When new team members need help, ChatGPT can now provide answers that include not just the official rules but also real-world tips and special cases discussed in earlier chats. Over time, it becomes a living knowledge base that gets better with each use.  

  2. Long-Term Customer Relationship Management 

A real estate agency could set up special ChatGPT accounts for its top clients, with agents discussing property needs, neighborhood preferences, and budgets with ChatGPT during meetings. The system learns more about each client’s preferences.  

Months into the home search process, ChatGPT can recall small details from early chats (remember when Mrs. Johnson mentioned loving natural light in the kitchen?) to help agents find the right homes. This long-term memory enables agents to offer an individual approach that would be hard to maintain across many clients.  

  3. Ongoing Creative Brainstorming 

A creative team can transform its brainstorming process by maintaining ongoing dialogues with ChatGPT across multiple sessions and weeks. Rather than starting each ideation session from scratch, the team can build on concepts explored in previous conversations, with ChatGPT remembering which ideas were rejected, which showed promise, and why certain approaches were preferred.  

This way of working speeds up development by eliminating repeated setup and allowing the team to refine ideas over time, much as they would with a human teammate.  

The Future of AI Assistants 

This update transforms our relationship with AI tools, enabling ongoing partnerships: an assistant that helps you in January can still recall your ideas in June.  

Persistent memory creates something approaching an actual working relationship, in which shared context and knowledge make each conversation more useful than the last. For businesses ready to invest in these AI partnerships, the boost in productivity could be huge. The feature is rolling out now (though not yet in the EU and the UK, owing to regulatory considerations), with plans to expand to Team, Enterprise, and Education users soon. Custom GPTs will also eventually have their own separate memory capabilities.  

As we explore this new future, I believe we’re just starting to see how persistent AI memory can transform business. Yesterday’s chatbots are becoming tomorrow’s true partners — a future worth anticipating.  

Source: ChatGPT’s New Memory: How OpenAI’s Latest Feature Will Transform Your Business Workflows 

OpenAI has reached a new milestone in conversational search by adding location awareness to its platform. Starting March 26, 2026, the company began rolling out a location-sharing feature on its web and mobile apps. This change lets the system handle questions about the physical world more dynamically. Now the platform can provide real-time local answers, something previously limited to search engines and GPS- and IP-based navigation apps. Users can get tailored responses for things like neighborhood tips, local news, and weather alerts, making digital assistance more connected to their surroundings.  

The Transition to Spatial Context 

In the past, digital assistants required users to type in their location, such as “New York” or “coffee shops in Brooklyn.” The new update changes this by adding an optional location-sharing toggle in the data controls menu. When turned on, the system can use either general or exact location data to improve search results. Sharing your precise location means the assistant can give address-level accuracy, which is important for “near me” searches where small distances matter.  

Having access to the location makes real-time searches much more useful. The system can now highlight local news, transit updates, and neighborhood events that might otherwise get lost in global results. For travelers and commuters, the assistant now feels more like a local guide who understands what’s happening nearby.  

Privacy Architecture And Data Sovereignty 

Because GPS data is sensitive, OpenAI uses a permission-first approach for this feature. Location sharing is off by default, and users must choose to turn it on. The system also lets users pick between sharing an approximate or a precise location. On mobile devices, you can share just your city area and keep your exact address private, giving people a way to get local updates without revealing their exact movements. Precise location data is used only for the duration of the specific request–response cycle and is deleted from the active session once the answer is provided; while the names of nearby locations and maps may become part of the chat history, similar to any other response, the underlying raw GPS coordinates are not stored long term. For younger users, parental controls have been expanded to allow guardians to disable location features entirely for teen accounts, ensuring that safety remains a core component of the deployment.  
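The two privacy mechanisms described, coarsening coordinates for approximate mode and discarding the raw fix after one request–response cycle, can be sketched in a few lines. This is an illustrative pattern, not OpenAI's code; the function names are hypothetical:

```python
# Hedged sketch of the privacy pattern described above: coarsen the GPS
# fix for "approximate" mode, and use it only inside one request.
# Names and precision choices are illustrative assumptions.

def coarsen(lat: float, lon: float, decimals: int = 2) -> tuple[float, float]:
    """Round a GPS fix to ~1 km precision (2 decimal places)."""
    return round(lat, decimals), round(lon, decimals)

def answer_with_location(query: str, lat: float, lon: float, precise: bool) -> str:
    """Use the coordinates only within this call; nothing is retained after."""
    loc = (lat, lon) if precise else coarsen(lat, lon)
    # ... a real system would look up local results for `loc` here ...
    return f"Results for {query!r} near {loc[0]:.4f}, {loc[1]:.4f}"
```

Rounding to two decimal places keeps the response useful for neighborhood-level queries while making it impossible to recover a street address from what was shared.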

Enhancing Real-Time Utility and Commerce 

Adding location awareness is key for the new Agentic Commerce Protocol (ACP). By knowing how close a user is to nearby stores, the system can give more accurate details about product availability. Now, if someone asks about a household item, they can see which local stores have it in stock and when each store is open. This makes the interactive interface an effective tool for local shopping, making it easier to go from finding something online to buying it in person.  

This feature will also be useful for local government and public services beginning in early 2027. During its pilot, the system showed it could provide live updates on road closures, voting locations, and medical alerts based on users’ exact locations during emergencies or fast-changing events. Instant, location-based information can be a key resource, giving clear, relevant guidance for the user’s area.  

The Synthesis Of Information And Geography 

For developers and businesses, adding location awareness creates new opportunities for specialized apps and plugins. These tools can now start different workflows based on the user’s location. For example, a travel app could automatically show local currency rates and safe transit tips as soon as an employee arrives in a new country. This sort of proactive location-based help is the next step for agentic software, which understands both the user’s question and the physical surroundings.  

This update helps users connect their intent to their surroundings. It supports invisible search, where the assistant gives information at the right time, with less effort from the user. Whether someone needs a quiet park or the nearest pharmacy, the platform now has the intelligence to help in an ever-changing world.  

The Resonance Of The Present  

As these digital threads weave into the physical texture of our neighborhoods, we witness a quiet structural shift in our relationship with information. It is as if the great silent archive of the world has stepped into the sunlight, learning to walk the streets and breathe the air of the cities it once only described. We move toward a horizon at which the search engine is no longer a distant observer but a watchful companion that knows the burden of the rain on our shoulders and the length of the road before our feet. Eventually, the distinction between where we are and what we know may dissolve, leaving us to inhabit a world where every corner holds a tale, and every street has a voice. We might one day wake and find the maps in our pockets have become living things, expressing the vibrant hum of a reality always awake, always local, and always ready to show us the way home. 

Source: What information is shared when I search? 

OpenAI has officially retired several older ChatGPT models, including the popular GPT-4o. Starting February 13, 2026, this change helps standardize the platform on the new advanced GPT-5 series.  

This move encourages users to shift to newer models, especially GPT-5.2. These updated models offer better personalization, improved creative capabilities, and fewer unnecessary rejections or overly agreeable responses, aligning with OpenAI’s goal to offer a consistent experience.  

Key Aspects of the Model Retirement 

  • Retired models: as of February 13, 2026, ChatGPT no longer supports GPT-4o, GPT-4.1, GPT-4.1 mini, OpenAI o4-mini, or the previously announced GPT-5 (Instant/Thinking).  
  • Transition focus: OpenAI reported that 99.9% of users had already shifted to newer versions, with GPT-5.2 accounting for the vast majority of daily active usage.  
  • API continuity: this retirement applies solely to the ChatGPT consumer interface. API access to legacy models stays available for now, with developers receiving advance notice before future API sunsets.  
  • Enterprise access: Business, Enterprise, and Edu customers can continue using legacy models through custom GPTs until April 3, 2026, after which these customers will no longer have access to legacy ChatGPT models.  

New reasoning and customization standards: the move to GPT-5.1 and GPT-5.2 combines the best parts of older models, such as GPT-4o’s warmer personality, into newer, more flexible systems. New features include:  

  • Selectable base styles: users can select base tones (e.g., Friendlier) and adjust Warmth and Enthusiasm levels directly in Settings.  
  • Improved reasoning: the new models prioritize advanced analytical skills, coding, and multi-step reasoning over simpler one-shot chatbot functions.  
  • Adult mode: OpenAI is developing an 18-plus version of ChatGPT that will offer adults greater freedom with reduced safeguards.  

This consolidation lets OpenAI accelerate innovation in personalization and creative performance. GPT-5.2 is rolling out now, bringing the latest advances in reasoning and agentic workflows to ChatGPT.  

Alongside the retirement of GPT-5 Instant and Thinking, GPT-4.1, GPT-4.1 mini, and OpenAI o4-mini will also be unavailable on ChatGPT after February 13, 2026. There is no API change at this time.  

In particular, while this announcement applies to several older models, GPT-4o deserves special context.  

After we first deprecated it and later restored access during the GPT-5 release, we learned more about how people use GPT-4o in their day-to-day. We brought back GPT-4o after hearing clear feedback from some Plus and Pro users: they told us they needed more time to transition key use cases, such as creative ideation, and that they prefer GPT-4o’s dialogue style and warmth.  

That feedback helped shape GPT-5.1 and GPT-5.2, leading to better personality, stronger support for creative ideas, and more ways to customize ChatGPT’s responses. You can now choose base styles and tones, like Friendlier, and adjust settings for warmth and enthusiasm. Our goal is to give people more control over how ChatGPT feels to use, not just what it can do.  

We are announcing the retirement of GPT-4o now because improved models are available, and most users have already switched to GPT-5.2, with only 0.1 percent still using GPT-4o daily.  

More generally, we are working to improve ChatGPT in areas users have identified as needing attention. This includes improving character and creativity, and reducing unnecessary refusals caused by overly cautious responses. Updates are coming soon. We are also developing a version of ChatGPT for adults over 18, focused on giving adults more choice and freedom with suitable safeguards. To help with this, we have introduced age prediction for users under 18 in most markets.  

We know changes like this take time to get used to, so we are providing advance notice. Model access will change on February 13, 2026, and limited enterprise access will end on April 3, 2026. We understand that losing access to GPT-4o may be frustrating, and this decision was made with care. Retiring models helps us focus on improving the ones most people use now.

Source: Retiring GPT‑4o, GPT‑4.1, GPT‑4.1 mini, and OpenAI o4-mini in ChatGPT 

OpenAI has redefined ChatGPT, transforming it into a powerful AI product discovery and shopping platform that challenges traditional search engines.  

Building on these enhancements, the new features will be available to all users (Free, Go, Plus, and Pro) starting in late March 2026.  

New Shopping And Discovery Features 

  • Conversational shopping research: users describe what they want, and ChatGPT delivers tailored buyers' guides by analyzing web data, reviews, and product details.  
  • Visual search and comparison: users upload images or describe needs, and ChatGPT delivers instant side-by-side comparisons of products, prices, reviews, and features, eliminating the need to browse multiple sites.  
  • Memory integration: ChatGPT recalls prior conversations to better understand your preferences and refine recommendations.  
  • Targeted categories: the system excels in detailed domains like electronics, beauty, home and garden, and fashion.  

Commerce Integration (Agentic Commerce Protocol Or ACP) 

  • Merchant partnerships: leading retailers Target, Sephora, Nordstrom, Lowe's, Best Buy, Home Depot, and Wayfair are now integrated with ChatGPT.  
  • Direct product feeds: merchants use OpenAI's ACP to deliver real-time product feeds, ensuring data, pricing, and availability remain accurate throughout conversations.  
  • Instant checkout expansion: OpenAI is integrating Shopify and other e-commerce platforms, and merchants can build custom chat app experiences.  

Impact On Search And Marketing 

  • Rethinking search: this update shifts product discovery from keyword searches to dynamic, AI-driven recommendations.  
  • Organic recommendations: the platform currently features organic, unsponsored recommendations. Beginning late March 2026, OpenAI will begin testing ads for Go users in the US to support service sustainability.  
  • SEO shift: marketers must now optimize for AI-driven discovery, not just traditional search engines. Structured data and conversational content take priority.  

Altogether, this update marks a major shift toward AI as the main starting point for shopping, handling millions of shopping-related questions every day.  

AI is reaching a point where everyone can have a personal assistant to support their learning and productivity. Who gains access to this technology will decide if AI expands opportunity or reinforces existing inequalities.  

We want to make powerful AI available to everyone. Since August, we have launched ChatGPT Go, our low-cost subscription, in 171 countries. Now Go is coming to the US and everywhere else ChatGPT is available. For $8 a month, you get more features, including messaging, image creation, file uploads, and storage. Soon, we will also start testing ads in the US for the Free and Go plans. This will help more people use our tools with fewer limits or for free. Pro, Business, and Enterprise plans will stay ad-free.  

As we introduce ads, our focus remains on preserving what makes ChatGPT valuable. You can trust that answers are based on helpfulness, not advertising. Your data is kept private and not sold to advertisers. You also have control over ad relevance and personalization.  

With that in mind, here are the principles that guide how we approach advertising:  

Our Advertising Principles 

  • Mission alignment: Advertising should help make AGI accessible to all.  
  • Answer independence: Ads never affect the answers you get from ChatGPT. Answers are always based on what's most helpful to you. Ads are kept separate and clearly marked.  
  • Conversation privacy: Your ChatGPT conversations are private and are not shared with advertisers.  
  • Choice and control: You can personalize and clear ad data at any time. There will always be an ad-free paid option.  
  • Long-term value: Trust and experience matter most, not maximizing time spent in ChatGPT.  

We plan to start testing ads for logged-in adults in the US on the Free and Go plans soon. The testing will begin in the coming weeks and roll out gradually. Ads will appear at the bottom of ChatGPT answers. When there is a relevant sponsored product or service related to your conversation, ads will be clearly marked and kept separate from regular answers. You will be able to see why you are seeing an ad, dismiss it, and provide feedback. During this initial testing phase, we won’t show ads to users under 18 or near sensitive topics like health, mental health, or politics.  

The best ads are helpful, entertaining, and help people find new products and services. With AI, we’re excited to create new advertisement experiences that are more useful and relevant than ever before. Chat-based interfaces let people do more than just click links; for example, you might soon see an ad and be able to ask questions to help you decide what to buy.  

Ads can also help small businesses and new brands compete. AI tools make it easier for anyone to create great experiences that help people find options they might not have seen before.  

We'll listen to feedback and keep improving how ads appear. But our promise to put users first and keep your trust remains the same as we build our ad platform. With these principles, we can ensure our goals align with what people want from ChatGPT. We are focused on creating products and experiences that people and businesses value enough to pay for. Our enterprise and subscription services are already strong, and we believe ads can help make AI even more accessible as part of a balanced revenue model.  

When ad testing begins, we’ll seek feedback to ensure ads help expand AI access while maintaining trust in ChatGPT. 

Source: Our approach to advertising and expanding access to ChatGPT 

Right now, we are moving from models that excel at specific tasks to agents that can handle more complex workflows. When you prompt a model, you only get its trained knowledge, but if you give it a computer environment, it can do much more, like run scripts, request data from APIs, and create useful artifacts like spreadsheets and reports.  

When building agents, some practical problems come up. For example:  

  • Deciding where to store intermediate files.  
  • Avoiding pasting large tables into prompts.  
  • Giving workflows network access without causing security issues.  
  • Handling timeouts and retries without building your own workflow system.  

To address these agent-specific challenges, we built the components needed to give the Responses API a computer environment. By doing this, we enable reliable management of real-world tasks, freeing developers from having to create their own execution setups. This sets the stage for tackling the broader practical problems faced in agent development.  

OpenAI's shell tool and hosted container workspace address these challenges. The model suggests steps and commands that run in a separate environment with its own filesystem, optional storage (e.g., SQLite), and limited network access.  

With this foundation in place, let’s explore how we build a computer environment for agents and discuss early lessons from using it to accelerate, standardize, and improve safety in production workflows.  

The Shell Tool 

A good agent workload needs a tight execution loop:  

  1. The model suggests an action.  
  2. The platform executes it.  
  3. The result informs the next step.  
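This loop can be sketched in plain Python. The function names `propose_action` and `run_in_container` are hypothetical stand-ins for the model call and the container runtime, not real API methods:

```python
def propose_action(context):
    # Hypothetical stand-in for a model call: it proposes either a
    # shell command or a final answer once the task is done.
    if not any(item.get("role") == "tool" for item in context):
        return {"type": "shell", "command": "wc -l data.csv"}
    return {"type": "final", "answer": "data.csv has 3 lines"}

def run_in_container(command):
    # Hypothetical stand-in for the container runtime executing a command.
    return {"role": "tool", "output": "3 data.csv"}

def agent_loop(prompt):
    context = [{"role": "user", "content": prompt}]
    while True:
        action = propose_action(context)              # 1. model suggests an action
        if action["type"] == "final":
            return action["answer"]
        result = run_in_container(action["command"])  # 2. platform executes it
        context.append(result)                        # 3. result informs next step
```

The key design point is that the model only ever proposes actions; execution and context management stay with the orchestrator.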

We'll start with the shell tool to illustrate this loop, then discuss the container workspace, networking, reusable skills, and context compaction.  

To understand the shell tool, it helps to know how a model uses tools: it suggests tool calls after seeing step-by-step examples during training. The model proposes tool use but can't execute the calls itself.  

The shell tool gives the model command-line access to perform tasks like text search or API requests using familiar Unix utilities such as grep, curl, and awk.  

Unlike our current code interpreter, which runs only Python, the shell tool supports a much broader range of use cases. You can run Go or Java programs or start a Node.js server. This flexibility enables the model to handle more complex tasks.  

Orchestrating The Agent Loop 

On its own, a model can only propose shell commands, but how are these commands executed? We need an orchestrator to retrieve model output, invoke tools, and return the tools’ response to the model in a loop until the task is complete.  

The Responses API is how developers interact with OpenAI models. When used with custom tools, the Responses API returns control to the client, who must provide their own harness to run the tools. However, the API can also orchestrate between the model and hosted tools out of the box.  

When the Responses API receives a prompt, it assembles the model context: user prompt, prior dialog state, and tool instructions. For shell execution to work, the prompt must enable the shell tool, and the selected model must be trained to propose shell commands; GPT-5.2 and later models are trained to do so.  

The model then decides the next action. If it chooses shell execution, it returns one or more shell commands to the Responses API service. The API service forwards those commands to the container runtime, streams the shell output back, and feeds it to the model in the next request's context. The model can inspect the results, issue follow-up commands, or produce a final answer. The Responses API repeats this loop until the model returns a completion without additional shell commands.  

When the Responses API runs a shell command, it keeps a streaming connection to the container service open. As output appears, the API sends it to the model almost immediately. This lets the model decide whether to wait for more output, run another command, or issue a final response.  

The model can suggest several shell commands at once. The Responses API can run these commands concurrently in separate container sessions. Each session streams its output separately. The API then combines these streams into structured tool outputs for context. This allows the agent loop to run tasks such as searching files, fetching data, and checking results in parallel.  
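A minimal sketch of this fan-out pattern, using a thread pool to stand in for the separate container sessions; `run_session` is a hypothetical placeholder for one session's execution:

```python
from concurrent.futures import ThreadPoolExecutor

def run_session(command):
    # Hypothetical stand-in for one container session; in the real
    # service each command streams output from its own session.
    return {"command": command, "output": f"ran: {command}"}

def run_commands_concurrently(commands):
    # Run the model's proposed commands in parallel and combine the
    # per-session outputs into one structured tool result. pool.map
    # preserves the original command order.
    with ThreadPoolExecutor(max_workers=len(commands)) as pool:
        results = list(pool.map(run_session, commands))
    return {"type": "tool_outputs", "results": results}

batch = run_commands_concurrently(["grep -r TODO .", "curl -s https://example.com"])
```

Keeping the combined result ordered and structured is what lets the model attribute each output to the command that produced it.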

Commands that handle files or process data may generate lots of shell output. This can fill the context window without adding much value. To manage this, the model sets an output limit for each command. The Responses API enforces the limit and returns a result that keeps both the start and end of the output, marking the skipped content in between.  
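The head-and-tail truncation described above can be sketched as a small helper; the marker format and limit value here are illustrative, not the API's actual output shape:

```python
def truncate_output(text, limit):
    # Keep the start and end of oversized shell output, marking the
    # skipped middle, so long logs don't flood the context window.
    if len(text) <= limit:
        return text
    half = limit // 2
    skipped = len(text) - 2 * half
    return text[:half] + f"\n...[{skipped} characters skipped]...\n" + text[-half:]
```

Keeping both ends matters because the start of a command's output often carries headers or errors while the end carries the final result.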

By combining concurrent execution and output limits, the agent loop maintains speed and context efficiency. The agent loop controls which tool outputs are included in the context, helping the model focus on important results rather than being overwhelmed by raw terminal logs.  

When The Context Window Gets Full: Compaction 

A challenge with agent loops is that some tasks run for a long time. The accumulated tool calls, outputs, and intermediate summaries can quickly fill the limited context window. Compaction addresses this by keeping important details while removing extraneous information. We built native compaction into the Responses API, so developers don't need to create their own summarization or state systems, and the feature matches how the models were trained.  

Our latest models are trained to review prior dialog states and generate a compaction item that stores key information in an encrypted, token-efficient format. After compaction, the context window includes this compaction item and the most important parts of the earlier window. This makes workflow progress smooth across window boundaries, even in long, multi-step, or tool-driven sessions. Codex uses this system to handle long programming tasks and repeated tool use without losing quality.  

You can use compaction either as a built-in server feature or through a separate /compact endpoint. With server-side compaction, you can set a threshold, and the system takes care of compaction timing for you, so you don’t need complex client logic. This setup allows a slightly larger input context window, so small overages just before compaction are handled rather than rejected. As models improve, the native compaction feature updates with every OpenAI model release.  
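A rough sketch of the server-side threshold behavior. The token estimate, `summarize` stand-in, and `keep_last` parameter are all illustrative assumptions, not the actual compaction algorithm:

```python
def estimate_tokens(items):
    # Crude token estimate: roughly 4 characters per token (assumption).
    return sum(len(i["content"]) for i in items) // 4

def summarize(items):
    # Hypothetical stand-in for the model-generated compaction item.
    return {"role": "compaction", "content": f"summary of {len(items)} items"}

def maybe_compact(context, threshold, keep_last=2):
    # When the estimated size crosses the threshold, replace older items
    # with one compaction item and keep the most recent turns verbatim.
    if estimate_tokens(context) <= threshold:
        return context
    older, recent = context[:-keep_last], context[-keep_last:]
    return [summarize(older)] + recent

context = [{"role": "user", "content": "x" * 100} for _ in range(5)]
compacted = maybe_compact(context, threshold=100)
```

Running compaction server-side at a threshold means the client never has to detect "context full" errors and retry.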

Codex played a key role in building the compaction system and was one of its first users. When a Codex instance hit a compaction error, we started another instance to investigate, and working through the problem this way helped us build a robust compaction system. Codex's ability to examine and improve itself has become unique to OpenAI: while most tools simply require users to learn them, Codex learns with us.  

Container Context 

Now let's talk about state and resources. The container is more than merely a place to run commands; it's also the model's working environment. Inside the container, the model can read files, query databases, and reach external systems, all under network policy controls.  

File Systems 

The first part of the container context is the file system, which is used to upload, organize, and manage resources. We created container and file APIs to give the model a clear view of available data and help it perform targeted file operations rather than broad exploratory scans.  

A naive approach puts all inputs directly into the prompt context. As inputs grow, filling the prompt becomes more expensive and harder for the model to navigate. A better approach is to stage resources in the container's file system and let the model decide which to open, parse, or run via shell commands, much as humans do. Models work better with organized information.  

Databases 

The second part of the container context is databases. We recommend storing structured data in SQLite and querying it directly rather than copying a spreadsheet into the prompt. Describe the tables and columns, and explain their meanings so the model can pull only the needed rows.  

For example, if you ask which products had declining sales this quarter, the model can look up only the relevant rows rather than search the entire spreadsheet. This approach is faster, cheaper, and better suited to large data sets.  
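The declining-sales example might look like this with Python's built-in `sqlite3`; the table and column names are illustrative:

```python
import sqlite3

# Stage structured data in SQLite inside the container instead of
# pasting a spreadsheet into the prompt.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (product TEXT, quarter TEXT, change REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [("widget", "Q3", -4.2), ("gadget", "Q3", 1.8), ("gizmo", "Q3", -0.5)],
)

# The model can pull only the relevant rows rather than scanning everything.
declining = conn.execute(
    "SELECT product FROM sales WHERE quarter = ? AND change < 0 ORDER BY product",
    ("Q3",),
).fetchall()
```

Only the two matching rows ever enter the model's context, which is what keeps this cheaper than prompt-stuffing as the table grows.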

Network Access 

The third part of the container context is network access, which is essential for agent workloads. Agents may need to fetch live data, call external APIs, or install packages. Giving containers full internet access can be risky: it allows them to send data to outside sites, access sensitive systems, and makes leaks harder to prevent.  

To solve these problems without limiting what agents can do, we set up hosted containers to use a central egress policy proxy. All outgoing network requests go through a central policy layer that enforces allow lists and access controls and keeps traffic visible. For credentials, we use domain-scoped secret injection at egress: the model and container only see placeholders, while the real secret values remain hidden and are used only for approved destinations. This reduces the risk of leaks while still allowing secure external calls.  
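A toy sketch of the two ideas together, an egress allow list plus placeholder substitution at the boundary; the domain, placeholder syntax, and secret values are all made up for illustration:

```python
ALLOWED_DOMAINS = {"api.example.com"}           # egress allow list (illustrative)
SECRETS = {"{{API_KEY}}": "real-secret-value"}  # resolved only at egress

def egress_request(domain, headers):
    # Central egress policy: block hosts not on the allow list, then
    # swap credential placeholders for real values so the container
    # itself never sees the secret.
    if domain not in ALLOWED_DOMAINS:
        raise PermissionError(f"egress to {domain} blocked by policy")
    resolved = {k: SECRETS.get(v, v) for k, v in headers.items()}
    return {"domain": domain, "headers": resolved}

allowed = egress_request("api.example.com", {"Authorization": "{{API_KEY}}"})
```

Because substitution happens only after the allow-list check, a command that tries to send the placeholder to an unapproved host leaks nothing useful.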

Agent Skills 

Shell commands are powerful, but many tasks follow similar multi-step patterns. Agents often must replan and relearn, leading to inconsistent results. Agent skills package these patterns into reusable building blocks. A skill is a folder with a SKILL.md file and resources such as API specs and UI assets.  

This structure maps naturally to the runtime architecture we described earlier. The container provides persistent files and an execution context, and the shell tool provides the execution interface. With both in place, the model can discover skill files using shell commands (ls, cat, etc.) when needed, interpret the instructions, and run skill scripts within the same agent loop.  

We provide an API to manage skills on the OpenAI platform. Developers upload and store skill folders as versioned bundles, which can later be retrieved by skill ID before sending the prompt to the model. The Responses API loads the skill and includes it in the model context. The sequence is deterministic:  

  1. Fetch skill metadata, including name and description.  
  2. Fetch the skill bundle, copy it into the container, and unpack it.  
  3. Update model context with skill metadata and the container path.  
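The three steps above can be sketched against an in-memory stand-in for the skill store; the skill ID, folder layout, and context shape are hypothetical:

```python
SKILL_STORE = {
    "skill_123": {
        "name": "report-builder",
        "description": "Builds a weekly report",
        "bundle": {"SKILL.md": "# Steps...", "build.py": "print('ok')"},
    }
}

def load_skill(skill_id, container_files, model_context):
    skill = SKILL_STORE[skill_id]
    # 1. Fetch skill metadata.
    meta = {"name": skill["name"], "description": skill["description"]}
    # 2. Copy the bundle into the container and unpack it.
    path = f"/skills/{skill['name']}"
    for filename, content in skill["bundle"].items():
        container_files[f"{path}/{filename}"] = content
    # 3. Update model context with metadata and the container path.
    model_context.append({"type": "skill", **meta, "path": path})
    return path

container_files, model_context = {}, []
skill_path = load_skill("skill_123", container_files, model_context)
```

Only the lightweight metadata and path enter the context up front; the model reads the full instructions from the container later, if and when the skill is relevant.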

When deciding if a skill is relevant, the model reviews its instructions step by step and runs its scripts using shell commands in the container.  

How Agents are Made 

To put all the pieces together: the Responses API handles orchestration, the shell tool provides the action interface, the container provides a durable runtime context, skills add reusable workflow logic, and compaction lets an agent run for a long time with the context it needs for an end-to-end workflow.  

An agent can discover the right skill, fetch data and transform it into local structured state, query it efficiently, and generate durable artifacts.  

Make Your Own Agent 

For a step-by-step example using the shell tool and computer environment, see our developer blog post and cookbook. These resources show how to package and run a skill with the Responses API.  

We’re eager to see what developers build. Language models go beyond creating text, images, and audio. We’ll continue to enhance our platform for complex real-world tasks at scale.

Source: From model to agent: Equipping the Responses API with a computer environment