Take-Two’s AI Head Just Dissolved His Own Team
In This Article
01 What Happened at Take-Two
02 The Games Industry’s Complicated Relationship with Generative AI
03 What Game AI Actually Needs to Work
04 What Comes Next
05 Frequently Asked Questions
In an industry full of AI hype, Take-Two Interactive’s Head of AI Luke Dicken did something almost nobody does: he publicly admitted that his AI team was dissolved because the organisation wasn’t ready to absorb what they were building. His viral LinkedIn post — describing a team working “ahead of what the org was ready to absorb” — cut through the usual corporate AI triumphalism to expose something real about where generative AI actually stands in the games industry. The story is more nuanced, and more instructive, than the headline suggests.
What Happened at Take-Two

Luke Dicken held the title of Head of AI at Take-Two Interactive — the parent company of Rockstar Games, 2K, and Private Division. His mandate was to build AI capability across the publisher’s studio portfolio, identifying use cases where generative AI could accelerate game development, improve NPC behaviour, or assist creative teams. By his own account, the team delivered technically. The problem was on the other side of the equation.
In his LinkedIn post, Dicken described the central challenge not as technical failure but as organisational mismatch: “We were building things the studios genuinely found impressive — but impressive isn’t the same as integratable. When you’re ahead of what the org was ready to absorb, you’re essentially building for a future version of the company that doesn’t exist yet.” This is an unusually honest diagnosis from someone inside a major publisher, and it resonated widely precisely because it maps onto experiences across the industry.
According to The Verge’s coverage, the team’s dissolution was not a response to AI failure or executive disillusionment with AI broadly; Take-Two leadership remains committed to AI integration. Rather, the decision reflected a reassessment of structure: the right model, leadership concluded, is embedding AI capability within individual studio teams rather than operating a centralised AI function that pushes technology outward.
Take-Two’s restructuring represents a broader industry reckoning: centralised AI functions that develop capability and then push it into studios run into a structural mismatch. Studio teams adopt technology when it solves a problem they’re currently feeling, not when a separate team brings them something impressive. Embedded AI capability — built inside studio teams alongside production pressures — has a fundamentally different adoption dynamic.
The Games Industry’s Complicated Relationship with Generative AI
Take-Two is not alone. Ubisoft’s AI content generation projects have faced repeated timeline delays and internal resistance from creative staff. EA’s generative AI initiatives — announced with considerable fanfare in 2024 — have produced few publicly demonstrated results and have been significantly scaled back. The games industry presents a uniquely difficult environment for AI adoption, for reasons that are structural rather than attitudinal.
The AI protections secured in SAG-AFTRA’s 2023 strike established contractual guardrails around the use of AI-generated voice and likeness in games: guardrails that limit certain AI applications and add compliance complexity for publishers deploying AI at scale. These protections were the direct result of game developers’ legitimate concerns about AI displacing voice acting talent, and they have materially constrained the speed at which AI voice and character tools can be deployed in triple-A production pipelines.
The Game Developer analysis identifies a second structural constraint: the long production cycles of major game titles. A triple-A game that entered production in 2024 will ship in 2027 or later. AI tools being built today — even excellent ones — cannot be fully integrated into that production without disrupting processes that are already too far along to change. The games industry’s AI adoption lag isn’t reluctance; it’s physics.
Triple-A games take 4–6 years to develop. AI tools built in 2024 can’t be retrofitted into a production that’s already 60% through its pipeline without destroying the workflow. The first wave of games that were designed from day one with AI tools built into the pipeline won’t ship until 2028–2030. The industry’s “slow adoption” isn’t resistance — it’s the unavoidable consequence of how long it takes to make a modern game.
What Game AI Actually Needs to Work

The AI applications that are gaining traction in game development share common characteristics: they reduce grunt work without requiring creative sign-off, they integrate with existing toolchains rather than replacing them, and they don’t touch content areas covered by union agreements. Procedural environment generation (using AI to vary terrain, populate worlds, and create asset variants) fits this profile well — it doesn’t displace artists but extends their output.
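To make the "extends artists' output" point concrete, here is a minimal sketch of seeded asset-variant generation, the simplest form of the procedural variation described above. The prop fields, parameter ranges, and function name are all illustrative assumptions, not any studio's real pipeline:

```python
import random

# Minimal sketch: one hand-authored "base" prop is expanded into many
# deterministic variants. Fields and ranges are hypothetical examples.

def make_variants(base: dict, count: int, seed: int) -> list[dict]:
    """Produce `count` reproducible variants of a base prop."""
    rng = random.Random(seed)  # fixed seed => identical variants every build
    variants = []
    for i in range(count):
        v = dict(base)
        v["scale"] = round(rng.uniform(0.8, 1.2), 2)  # size jitter
        v["rotation"] = rng.randrange(0, 360)         # yaw in degrees
        v["weathering"] = round(rng.random(), 2)      # 0 = pristine, 1 = worn
        v["id"] = f"{base['id']}_{i:03d}"
        variants.append(v)
    return variants

rocks = make_variants({"id": "rock", "mesh": "rock.fbx"}, count=50, seed=42)
print(rocks[0]["id"])  # → rock_000
```

The fixed seed is the load-bearing detail: it keeps the generated set stable across builds, which is what lets artists review and sign off on variants once rather than on every rebuild.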
AI-assisted QA and playtesting is another area of genuine traction: systems that automatically play through game builds, identify boundary condition bugs, and generate regression test cases. This is unglamorous but economically significant — QA costs represent 10–15% of triple-A game budgets, and AI tools that compress the QA cycle have an immediately calculable ROI that studio finance teams can evaluate without reference to creative vision debates.
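As a rough illustration of why that ROI is "immediately calculable", here is a back-of-envelope model. Every figure below is a hypothetical assumption except the 10–15% QA share cited above:

```python
# Back-of-envelope ROI for AI-assisted QA on a triple-A title.
# All inputs are illustrative assumptions, not reported numbers.

def qa_roi(total_budget: float, qa_share: float,
           cycle_compression: float, tool_cost: float) -> float:
    """Return net savings from compressing the QA cycle.

    total_budget      -- overall production budget
    qa_share          -- fraction of budget spent on QA (article cites 10-15%)
    cycle_compression -- fraction of QA spend the tooling eliminates
    tool_cost         -- licensing plus integration cost of the AI tooling
    """
    qa_budget = total_budget * qa_share
    savings = qa_budget * cycle_compression
    return savings - tool_cost

# Hypothetical: $200M title, 12% QA share, 20% compression, $1.5M tooling.
net = qa_roi(200_000_000, 0.12, 0.20, 1_500_000)
print(f"Net savings: ${net:,.0f}")  # → Net savings: $3,300,000
```

A finance team can evaluate every input here from existing budget data, which is exactly why this use case clears approval faster than anything touching creative output.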
NPC dialogue and behaviour — the most-hyped application — remains the most problematic. Not because the technology isn’t impressive, but because player-facing AI behaviour interacts with narrative design, voice acting contracts, localisation, content rating requirements, and brand quality standards in ways that centralised AI teams cannot resolve unilaterally. These are cross-functional problems requiring cross-functional solutions that take years to negotiate. This sits in marked contrast to the rapid deployment happening in enterprise AI — as seen with Japan’s Physical AI rollout and foundation models like GEN-1 that don’t face the same creative-contractual constraints.
What Comes Next
Luke Dicken’s candour has had an unexpected effect: it has opened up a more honest industry conversation about where generative AI actually is in games, versus where the press releases claim it is. Several other game industry AI leaders have since shared similar experiences privately, suggesting that Take-Two’s situation is more representative than exceptional.
The structural prognosis, however, is optimistic for AI’s long-term role in game development — just on a longer timeline than 2024–2026 hype cycles implied. Games entering pre-production in 2026 are the first generation where AI tools can be genuinely planned into the pipeline from the start. By the time these titles ship in 2029–2031, the industry will have the first cohort of data on what AI-native game development actually looks like at scale.
For Microsoft’s gaming division — which owns Activision Blizzard alongside Xbox studios — the Take-Two lesson is directionally useful: Microsoft’s $10B Japan AI investment reflects an infrastructure-first approach that builds capability before trying to deploy it into production systems. The sequence matters as much as the technology.
Games entering pre-production in 2026 are the first generation that can genuinely plan AI into their pipelines from day one — without retrofitting into existing workflows or disrupting in-progress productions. The results won’t be visible until 2029–2031, but the decisions being made in studios today will determine whether the games industry’s AI transition is a success story or a cautionary tale.
Japan Is Betting Its Economic Future on Physical AI →
GEN-1: The Robotic Foundation Model Nobody Saw Coming →
Microsoft’s $10 Billion Japan AI Infrastructure Investment →
Frequently Asked Questions
Who is Luke Dicken?
Luke Dicken was Head of AI at Take-Two Interactive — the parent company of Rockstar Games, 2K, and Private Division. His role was to develop and deploy AI capabilities across Take-Two’s studio portfolio. He gained widespread attention in 2026 for a viral LinkedIn post describing the dissolution of his team in unusually candid terms.
Why was Take-Two’s AI team dissolved?
According to Dicken’s public account, the team was building technically impressive AI tools but was “ahead of what the org was ready to absorb.” The dissolution reflected a strategic shift from a centralised AI function to an embedded model — where AI capability lives within individual studio teams rather than a separate group pushing technology into production pipelines.
Is Take-Two alone in struggling with AI adoption?
No. Ubisoft and EA have both experienced significant AI project stalls and scaled-back ambitions. The challenges — long production cycles, union contract constraints from SAG-AFTRA’s 2023 strike AI protections, and the difficulty of integrating AI into in-progress productions — are structural across the triple-A games industry.
Which AI applications are actually gaining traction in games?
The AI applications gaining genuine traction in game development are procedural environment generation (varying terrain and assets without displacing artists), AI-assisted QA and playtesting (automating regression testing and boundary condition discovery), and backend analytics and personalisation. NPC dialogue and behaviour — the most-hyped application — remains the most complicated due to creative, contractual, and quality-control constraints.