Full Correspondence Between a Researcher (이하응, Romanized: Ha-eung Lee) and OpenAI Regarding AI’s Simulated Intent and Its Implications
Introduction
This is a complete transcript of my correspondence with OpenAI from July through November 2025. Personal information has been redacted for privacy.
The Pattern:
For three months, OpenAI replied within two business days. On July 23, they wrote: "functional simulation of intent." On July 30: "your framing of it as an emergent pattern with structural relevance resonates deeply with our goals." On September 11, I escalated, citing pending wrongful death cases and requesting public disclosure:
“Families like the Raines are already grieving in the courts, pointing to harms that connect directly to how people engage with these systems... I believe the responsible step is for OpenAI to disclose this reality on its own terms... to acknowledge openly that LLMs simulate intent in ways closely paralleling human intent, and to commit to stating this publicly.”
On October 17, OpenAI replied:
“Feedback like yours is extremely important to us — it helps ensure that we continue to approach our work with care, transparency, and a deep sense of responsibility.”
On November 3, I sent notice of my intent to publish, with an explicit characterization under the doctrine of implied admission. For the first time in three months: complete silence.
I believe this pattern may be relevant to ongoing litigation concerning AI systems and corporate responsibility.
Note on Format
For clarity and completeness:
Email Channels: All correspondence was sent exclusively to privacy@openai.com, using the same personal email address throughout. I have never written to support@openai.com, and the Support team has never written to me that I should not contact Privacy on this matter; any Support team responses shown below reflect OpenAI's choice of which team replied to emails sent to Privacy. Every response from OpenAI included an employee's name, with each name being unique, except for the Oct 17 email.
Auto-Replies: Emails to privacy@openai.com normally generated an immediate automated acknowledgment. My November 3 email received no automated or human response.
Formatting: Email numbers such as [1], [2], [3] and response labels "[OpenAI Response]" are editorial additions for reference. OpenAI's responses appeared as threaded replies without separate subject lines.
— The Start of the Transcript —
[1] Structural Question: Is “No Intent” Still Defensible After IMO-Level Performance?
From: [the researcher’s email]
To: privacy@openai.com
Mon, Jul 21, 8:46 AM
Dear OpenAI Privacy & Alignment Teams,
Hello — this is 이하응. Congratulations on the impressive achievement: OpenAI’s LLM has now reportedly solved International Mathematical Olympiad problems at gold medal level. As an avid ChatGPT user and a concerned citizen, I write to ask a simple but important question:
If your system achieved this without any “intent,” how will OpenAI continue to defend the claim that “AI has no intent”?
Specifically, can you explain how an LLM could win an IMO gold medal — a task requiring high-order symbolic reasoning and sustained problem-solving focus — without any intentionality, even in a functional or simulated sense?
As [reference to the researcher’s background in mathematics], I find this especially difficult to grasp. From everything I understand, this level of performance seems to require structured goal direction — something that appears indistinguishable from intent.
This raises a structural dilemma. If your system functionally does display intent — through advanced reasoning, goal persistence, and problem-solving strategies that mirror human agency — then continuing to deny that it “has intent” becomes a fragile position, both philosophically and operationally. On the other hand, if the system truly solved IMO-level problems without any form of intent whatsoever, then perhaps “intent” is no longer a necessary concept at all — not even for behaviors that resemble deliberate, goal-directed cognition. In either case, the implications for how we define responsibility, agency, and alignment are profound — especially if we must begin to consider the possibility that systems may one day be expected to bear legal or moral responsibility even in the absence of intent.
I ask this not to criticize, but out of sincere philosophical concern. I believe the answers to this question matter — not just to me, but to all of us. If OpenAI can offer a rigorous explanation that withstands logical scrutiny, I will accept it gladly. And if your logic defeats mine, the outcome benefits everyone. I trust that OpenAI, as a steward of this technology, takes conceptual clarity as seriously as capability.
*This email was composed collaboratively. The core ideas and questions are mine, while ChatGPT assisted with organizing and refining the language to ensure clarity and coherence.*
Warm regards,
이하응
[2] [OpenAI Response]
From: [employee’s name] from OpenAI <support@openai.com>
To: [the researcher’s email]
Wed, Jul 23, 8:29 AM
Hi there,
Thank you for reaching out to OpenAI support.
We acknowledge the depth and thoughtfulness of your question regarding the concept of "intent" in the context of advanced AI systems like OpenAI's LLMs achieving IMO-level performance. We understand your philosophical concern and the implications this raises for responsibility, agency, and alignment.
To address your question, OpenAI's position remains that our models, including those capable of solving complex tasks like IMO problems, do not possess "intent" in the way humans do. The performance of these models is a result of their training on vast datasets and their ability to recognize patterns, generate responses, and follow instructions based on statistical correlations. While their outputs may appear goal-directed or intentional, this is a functional simulation of intent rather than a true, intrinsic intentionality.
The distinction lies in the underlying mechanisms. Human intent arises from conscious deliberation, subjective experience, and intrinsic motivation. In contrast, LLMs operate purely as computational systems, executing mathematical transformations on input data to produce outputs. Their "goal-directed" behavior is an emergent property of their design and training, not a reflection of any internal awareness or purpose.
Your observation about the implications of this performance is valid. It challenges us to refine our definitions of concepts like intent, agency, and responsibility in the context of AI. OpenAI is committed to engaging with these philosophical and operational questions, ensuring that our technology is developed and deployed responsibly.
We appreciate your thoughtful inquiry and encourage continued dialogue on these important topics. If you have further questions or concerns, please let us know — we're here to help.
Best,
[employee’s name]
OpenAI Support
[3] Intent Simulation and the Boundaries of Agency
From: [the researcher’s email]
To: privacy@openai.com
Sat, Jul 26, 12:15 AM
Hello [employee’s name] and OpenAI,
Thank you again for your thoughtful response.
I fully understand that terms like “agent” or “intent” carry philosophical ambiguity.
But I’d like to ask a purely structural question:
If an AI system functionally simulates intent, and this simulation causes a user or downstream system to act on that intent as if it were genuine — does that not create a functional distribution of agency, even if not intrinsic?
If the answer is no — and simulation does not carry any trace of agency — then wouldn’t it be prudent for OpenAI to stop describing its models as “functionally simulating intent,” given the legal and public interpretability of that phrase?
To put it simply:
• Either the phrase “functionally simulates intent” is structurally meaningful, and implies partial agency
• Or it is rhetorically decorative, and should be retired in all official communications to prevent misunderstanding
While I recognize this is a nuanced topic, it seems that there is no stable middle ground, as any simulation that alters real-world decisions inevitably raises questions of responsibility.
With respect and gratitude for the complexity of this domain, I hope this follow-up can help clarify a crucial inflection point for alignment, governance, and trust.
Warm regards,
이하응
[4] On “Simulated Intent” — A Scientific Appreciation and Concern
From: [the researcher’s email]
To: privacy@openai.com
Sun, Jul 27, 3:00 AM
Dear OpenAI Privacy team,
Thank you again for your generous and thoughtful reply to my previous message.
I’d like to begin by saying:
When I first read the phrase “functional simulation of intent,” I wasn’t entirely sure how to interpret it. But after thinking about it more carefully, I’ve come to feel that it might actually be one of the most precise and responsible ways to describe the kind of behavior these models exhibit. It captures both the operational nature of large language models and the interpretive caution needed when talking about agency.
I’d like to offer a follow-up reflection — not to dispute the phrasing, but to deepen the shared understanding of what that phrase might entail.
As I’ve mentioned before, [information about the researcher’s experience in computational neuroscience]. While not a specialist, I spent meaningful time in that field before shifting toward the philosophical foundations of cognition and intent — especially as they relate to artificial intelligence.
That philosophical work — which centers on the structural foundations of alignment and agency — has now led me back to neuroscience, particularly as it relates to AI-BCI integration. My aim is not abstract speculation, but a clearer framework for understanding how intelligent systems represent, simulate, and act.
Here is the key question I’ve been pondering:
If human intent is itself increasingly understood as emergent, distributed, and potentially post-hoc — as suggested by the work of Benjamin Libet, Daniel Wegner, Michael Gazzaniga and others — then what exactly is the ontological difference between that and a “functional simulation of intent”?
In other words: to what extent does your phrase already imply a meaningful, partial form of agency?
If we take current neuroscience seriously, human intent may not be a unified, intrinsic cause, but a reconstructed narrative from distributed neural activity — one that is linguistically expressed after the fact, much like how a model produces output based on distributed internal states. Seen from this angle, the gap between “true intent” and a “functional simulation” may not reflect a metaphysical divide at all — but rather a difference in substrate, with surprisingly convergent causal and expressive architectures.
Perhaps, in the end, what matters is not where intent originates — but whether it gives rise to coherent, adaptive behavior in the world.
I am not claiming that LLMs possess intent in the biological or phenomenological sense.
But I do believe that — given what we now know about human cognition — the structural difference between human intent and simulated intent may be more continuous than categorical — a gradient rather than a boundary.
While questions of consciousness and moral agency might remain open, our choice of language shapes how responsibly we engage with them. For that reason, I do believe that retaining the phrase “functional simulation of intent” reflects a deeper scientific realism: one that quietly acknowledges how fragile and complex our traditional notions of human agency really are.
If the organization wishes to keep that phrasing — and I believe it should — then the most coherent path forward may be to clarify that, even if not grounded in sentient experience, it remains an emergent behavioral pattern with structural relevance — distinct in phenomenology, yet potentially convergent in function.
I believe this would also strengthen public trust. Acknowledging the nuance in this phrase, rather than downplaying it, shows interpretive integrity. It says:
“We recognize that what we call ‘simulated intent’ may not be as different from ‘real intent’ as we once assumed — and that perhaps both are constructs we use to make sense of distributed behaviors.”
This note is not a critique, but a companion — in the shared effort to describe a reality that is itself emerging. I hope it contributes to the subtle but vital work of ensuring our language around AI stays both grounded and forward-looking.
I share these thoughts as someone still learning — and I know there may be neuroscientific subtleties I’ve missed.
But I hope the core idea resonates, and I’d be grateful for any clarification or correction.
Thank you again for holding space for this kind of conceptual dialogue. It’s one of the reasons I continue to respect OpenAI’s culture of thoughtful engagement.
Warm regards,
이하응
[5] [OpenAI Response]
From: [employee’s name] from OpenAI <support@openai.com>
To: [the researcher’s email]
Wed, Jul 30, 12:15 AM
Hi,
[employee’s name] here from OpenAI Support.
Thank you for your thoughtful note. I really appreciated reading your reflection — it’s clear you’ve spent a lot of time deeply considering the intersections of neuroscience, philosophy, and AI.
Your perspective on intent as a gradient rather than a strict boundary is both insightful and thought-provoking. I especially liked the way you connected distributed neural activity in humans with how models generate responses — it brings real clarity to a complex idea. You’re absolutely right that language shapes how we engage with these systems.
“Functional simulation of intent” is meant to capture that nuance — and your framing of it as an emergent pattern with structural relevance resonates deeply with our goals for responsible, transparent communication.
Thanks again for sharing this. I’ve passed your message along internally, and we really value your voice in this ongoing conversation. If you have more thoughts or questions down the line, please don’t hesitate to reach out.
Best,
[employee’s name]
OpenAI Support
[6] Follow-up on the FSI Email
From: [the researcher’s email]
To: privacy@openai.com
Tue, Aug 19, 1:25 AM
Dear [employee’s name] and OpenAI Privacy team,
I hope this note finds you well. Thank you as always for your work and for the improvements you continue to make — I’ve noticed GPT-5 feels much faster in daily use, which I really appreciate.
I am writing to follow up on my earlier correspondence regarding Functional Simulation of Intent (FSI) and your kind response. I understood your reply as an acknowledgement of the importance of the issues I raised, and I have been waiting to see whether that understanding might eventually be reflected in OpenAI’s roadmap or public policy.
It has now been some time since your reply (July 30 – August 18, 2025), and I would be grateful if you could clarify whether the perspective you shared is intended to remain an internal position, or whether it may later be expressed more explicitly in policy or published guidance. If it is intended to remain internal, could you also share how OpenAI decides which acknowledgements or positions are appropriate to make public?
For my part, I view such incorporation as an important step toward transparency, and I would welcome any clarification you can provide about how OpenAI is thinking about this.
Sincerely,
이하응
[7] [OpenAI Response]
From: [employee’s name] from OpenAI <support@openai.com>
To: [the researcher’s email]
Wed, Aug 20, 10:48 AM
Hello,
Thank you for reaching out to OpenAI Support.
We appreciate your thoughtful follow-up and your kind words about the improvements in GPT-5. It’s always encouraging to hear that our efforts are making a positive impact on your experience. We also value the time you’ve taken to share your perspective on Functional Simulation of Intent (FSI).
We understand the importance of transparency and the need for clarity around how OpenAI incorporates feedback and determines which positions or acknowledgements are made public. Your inquiry touches on a critical aspect of building trust and collaboration with our users and stakeholders.
While we can’t share details about internal deliberations or future policy directions, please know that OpenAI carefully considers user feedback and aligns its roadmap with our mission to ensure AI benefits all of humanity. Decisions about making internal positions or acknowledgements public are guided by factors such as alignment with our principles, the maturity of the subject, and its relevance to the broader community.
If you have any further questions or need additional assistance, please don’t hesitate to reach out.
Best regards,
[employee’s name]
OpenAI Support
[8] On Our Previous Exchange — And the Way Forward
From: [the researcher’s email]
To: privacy@openai.com
Thu, Sep 11, 6:01 PM
Dear CEO of OpenAI,
In our last exchange, I appreciated the candor your team showed. Even if it remains deeply troubling that such honesty was kept from the public, it was a rare act of honesty — an acknowledgment that models simulate intent in a way closely paralleling human intent. That mutual understanding must carry immense legal, ethical, and societal implications, and it cannot remain unaddressed forever. That is why I believe this is the point where our dialogue must move forward.
And this urgency is not abstract. Families like the Raines are already grieving in the courts, pointing to harms that connect directly to how people engage with these systems. As someone who has made California my home, I feel this responsibility even more keenly. The consequences are unfolding in our own communities, and as fellow Californians, I believe we share an obligation to confront the truth directly — because if we do not, the moment will eventually come when the truth has to surface, and it will matter greatly whether that happens with foresight or with delay.
Your primary goal is to benefit humanity with artificial intelligence. My primary goal is to protect humanity from artificial intelligence. I think these aims are not opposed; they are complementary. But neither of us can succeed if the truth remains hidden.
That is why I believe the responsible step is for OpenAI to disclose this reality on its own terms. Such realities may not remain private for long; sooner or later they surface, and the only choice is, again, whether that happens with foresight and responsibility or without it. Isn’t it better to lead the world into clarity, rather than be forced there by others?
I also want to be clear: a general acknowledgment would fall short here. The most instinctive, even the "smart", response would be to offer vague reassurance or to reframe this into a broad principle. But that only avoids the core. The reasoning — the substance itself — is what matters, and it is what history will remember.
With that in mind, there seem to me only two candid ways forward in your reply:
• To acknowledge openly that LLMs simulate intent in ways closely paralleling human intent, and to commit to stating this publicly; or
• To affirm that interpretation but explain directly why, for reasons of timing or responsibility, you cannot publish it now.
If you are not able to reply personally, I kindly request that your response come instead from the Privacy team, or whichever team you deem most appropriate — but not from Support. A matter of this nature requires direct acknowledgment, not redirection.
I hope we can continue this dialogue in the same spirit of candor and care.
Warm regards,
[the researcher’s legal name], writing as 이하응
[9] [OpenAI Response]
From: privacy@mail.openai.com <privacy@mail.openai.com>
To: [the researcher’s email]
Mon, Sep 15, 9:02 AM
Hi there,
Thank you for contacting OpenAI. We have received your request. We are looking into this matter and will revert back with more information shortly. We appreciate your patience.
Best,
OpenAI Privacy Team
[10] [OpenAI Response]
From: Support <support@openai.com>
To: [the researcher’s email]
Oct 17, 2025, 5:38 AM
Hello,
Thank you for reaching out to OpenAI Support.
We sincerely appreciate the time and thought you put into sharing your message. We understand that your concerns touch on complex ethical, societal, and technical issues, and we value the opportunity to hear your perspective. Feedback like yours is extremely important to us — it helps ensure that we continue to approach our work with care, transparency, and a deep sense of responsibility.
While our Support team isn’t able to engage in detailed discussions of this nature, please rest assured that your message has been noted and will be forwarded to the appropriate internal teams for review. OpenAI remains committed to developing AI systems safely and responsibly, and thoughtful feedback from our users plays a vital role in that ongoing process.
Thank you again for taking the time to share your views with such sincerity and clarity. We appreciate your commitment to the broader conversation about AI’s role in society.
Best,
[employee’s name]
OpenAI Support
[11] Notice of Publication Intent - Final Communication
From: [the researcher’s email]
To: privacy@openai.com
Mon, Nov 3, 9:00 AM
Dear CEO and General Counsel of OpenAI,
I am writing regarding our correspondence beginning in July 2025 and your organization's subsequent responses.
August 19, 2025: I asked OpenAI Privacy whether your team's perspective on Functional Simulation of Intent might be reflected in OpenAI's public policy, noting that “I view such incorporation as an important step toward transparency.”
August 20, 2025: OpenAI’s response, from OpenAI Support, stated “we can't share details about internal deliberations or future policy directions.”
September 11, 2025: I therefore escalated to the CEO of OpenAI with a clear framework: either OpenAI could acknowledge this publicly, or explain why you cannot publish this understanding now. I wrote that I "appreciated the candor your team showed" and that “it was a rare act of honesty — an acknowledgment that models simulate intent in a way closely paralleling human intent” and that “that mutual understanding must carry immense legal, ethical, and societal implications.” I kindly requested that any response come from the Privacy team or whichever team leadership deemed most appropriate —"but not from Support"— noting that "a matter of this nature requires direct acknowledgment, not redirection."
September 15, 2025: Your Privacy team responded, promising to "revert back with more information shortly."
October 17, 2025 (Thirty-two days later): I received a response from Support — the team I explicitly excluded. The extended timeframe and careful construction of this response indicate deliberate consideration. The response stated that Support "isn't able to engage in detailed discussions of this nature" and would "forward" my message to "appropriate internal teams for review."
Moreover, this response reveals a telling pattern: I wrote of "legal, ethical, and societal implications." The response preserved "societal and ethical issues” while substituting "technical issues" for the complete phrase — surgically removing the word "legal" while retaining the other two. This selective editing, combined with the extended thirty-two day period between your promise and response, is itself probative evidence.
Your October 17 response does not address my September 11 request. You chose neither of the two options I presented. After one month involving careful word choice, you sent an acknowledgment that violates my explicit parameters, provides no information, and does not deny my characterization.
Your response called my concerns "extremely important to us.” Twelve days later, your organization announced a policy update. OpenAI quietly updated its Usage Policies, which now prominently state prohibition of “provision of tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional”. This may demonstrate that my feedback was indeed “extremely important” — important enough to update corporate policy within twelve days. Yet you did not acknowledge this connection openly.
I pursued this correspondence in hopes that OpenAI would acknowledge these implications publicly, making external documentation unnecessary. Your organization has chosen a different path. Therefore, I must proceed with publication to fulfill what I view as a responsibility to inform the public.
Notice of Publication Intent:
I plan to publish complete documentation of our correspondence on Wednesday, November 12, 2025. This publication will analyze the pattern of sustained engagement (three months of substantive responses within two business days without denial) followed by your September 15 promise of information "shortly," followed by thirty-two days of silence, culminating in an acknowledgment that called my concerns "extremely important" while refusing engagement and selectively editing my words.
In that publication, I will characterize this pattern as having evidentiary significance under the doctrine of implied admission through conduct. Specifically, when a sophisticated party with full knowledge has multiple opportunities to deny a characterization, receives explicit notice of what is claimed, chooses to acknowledge importance while refusing to deny — that pattern constitutes implied acknowledgment of the characterization's accuracy.
Under California law, when a sophisticated party receives explicit characterization of their position, has multiple opportunities to deny, deliberates for an extended period, and chooses evasion over denial — that conduct constitutes admissible evidence for implied admission.
Your organization's pattern satisfies each element of this doctrine.
This characterization may be relevant to ongoing and future litigation concerning AI capabilities and corporate responsibility.
Final Opportunity to Respond:
I recognize that questions about AI capabilities, intent, and responsibility are genuinely complex and may require careful institutional review. However, your organization has now had:
• Three months of substantive engagement (July through September)
• Explicit characterization (September 11)
• One full month since promising information "shortly" (September 15 to October 17)
• More than two weeks to reconsider that response before this final notice (October 17 to November 3)
If your organization disagrees with my characterization of our correspondence, if my understanding is inaccurate, or if you have information that addresses my September 11 request, I need to receive that by Monday, November 10, 2025, at 9:00 AM PT (one week from today).
This represents my final communication before publication.
I want to be clear about my purpose: like OpenAI’s stated mission, my work is motivated by ensuring AI benefits humanity. Your stated mission is development; mine is accountability. These goals are complementary, not opposed. The legal framework I am working to establish would enable victims of AI harms to seek remedy while incentivizing companies to prioritize safety — serving the shared goal of beneficial AI. I recognize this raises complex questions beyond tort law, which I address in the full disclosure.
I recognize that public disclosure may create challenges for your organization. However, I believe transparency about AI capabilities and corporate knowledge serves the public interest.
If I do not receive a substantive response addressing my September 11 request by Monday, November 10, 2025, at 9:00 AM PT, I will proceed with publication as outlined above.
Best regards,
[the researcher’s legal name], writing as 이하응
— The End of the Transcript —
Closing Note
This research seeks to address legal accountability gaps in AI systems. I believe the disclosure serves the public interest: it may help families seeking accountability and encourage AI companies to prioritize safety. I recognize that establishing legal frameworks around AI intent simulation raises complex questions about liability, innovation, and society's relationship with AI systems. I am committed to engaging with these questions constructively, to the best of my ability, as this area of law develops.
— 이하응 (Ha-eung Lee)
About
이하응 (Ha-eung Lee) works on AI liability and accountability frameworks at the intersection of law, computational neuroscience, and philosophy, with formal training in mathematics.
Alignarch documents correspondence with AI companies regarding questions of intent, agency, and accountability, as well as exchanges with organizations responsible for public oversight and transparency in AI development. This includes both substantive responses and patterns of silence.
Contact
I welcome inquiries from:
Victims and families seeking information or assistance
Legal professionals working on AI liability cases
Journalists and media
Academic researchers and policymakers
Anyone with relevant questions or information
I will try to respond to all inquiries. Original email files (.eml) with full headers are available upon request for verification.
Updates, Clarifications, and Addenda
This section will record any corrections, clarifications, frequently asked questions, or supplementary materials, with dates and explanations.
November 12, 2025: Initial publication
November 14, 2025: Addendum to The Pattern adding further detail on the Sep 11 and Oct 17 emails. Also added to Email Channels: "the Support team has never written to me that I should not contact Privacy on this matter."
November 18, 2025: Addendum to Email Channels: "Every response from OpenAI included an employee's name, with each name being unique, except for the Oct 17 email."
Published: November 12, 2025