A report titled The OpenAI Files has appeared on the Web, exposing problems at ChatGPT developer OpenAI. The report's creators call it "the most comprehensive collection to date of documented problems with management practices, leadership integrity, and organizational culture at OpenAI." Journalist Dmitry Filonov, author of The Edinorog, a Telegram channel about startups, has compiled its main conclusions in his blog. Oninvest publishes the text with some minor clarifications.

An important question to start with: who are the authors? The site appeared amid news that the relationship between OpenAI and its largest investor, Microsoft, is close to the boiling point. OpenAI wants more independence and to go commercial, and Microsoft doesn't particularly agree.

The website states that the authors are The Midas Project and The Tech Oversight Project. The first monitors and investigates the practices of AI companies to ensure transparency and ethical standards. The second keeps an eye on all things Big Tech, checking that companies comply with antitrust rules, safety requirements, and so on.

"We have not received any funding, editorial guidance, assistance or support of any kind from Elon Musk, xAI, Anthropic, Meta, Google, Microsoft or any other OpenAI competitor. This report is guided solely by our commitment to corporate responsibility and public interest research," the authors write.

What's in the report? There are several large sections: on restructuring, on CEO Sam Altman, on transparency and safety, on conflicts of interest on the board, and on employee statements.

Below are the main findings, but I still advise you to dig into the full report.

About restructuring risks

1. OpenAI wants to remove the cap on investor returns. Originally, investor returns were capped at 100x. The idea was that if OpenAI created a super artificial intelligence capable of automating all human labor, the excess returns would go to humanity. Now the company wants to remove that restriction, so the profits would go to investors.

2. OpenAI's activities are said to be controlled by a non-profit organization. But the company wants to strip that non-profit of its powers: its board of directors would no longer have enough authority to hold OpenAI accountable for its activities.

3. Investors are pressuring OpenAI to make structural changes. OpenAI effectively admits that these changes are necessary to appease investors, and the removal of the return cap is one manifestation of that pressure.

For more detail, including organizational structure charts, see here.

About CEO Sam Altman

There are concerns here about Sam Altman's management methods and misleading statements.

1. Attempts were made to oust Sam Altman at each of the three major projects where he worked. At his first startup, top managers twice called on the board of directors to remove Altman over "deceptive and erratic" behavior. At Y Combinator, he was forced to leave amid accusations of absenteeism and of prioritizing personal enrichment.

2. At OpenAI, Altman said he was unaware that employees were being forced to sign extremely strict non-disclosure agreements. Nevertheless, he was the one who signed the documents allowing the company to revoke employees' equity if they refused to sign the NDA.

3. Altman repeatedly lied to OpenAI board members. For example, he claimed that lawyers had approved exceptions to the safety process, when this was not the case. He also claimed that one board member wanted to remove another, which again was not true.

A more detailed analysis is here.

About transparency and safety

There are concerns here about safety processes, transparency, and organizational culture at OpenAI.

1. The very strict non-disclosure agreement comes up again. Under it, employees who ever criticized the company (even after being fired) could lose their shares and options.

2. OpenAI has accelerated the safety assessment processes for its AI models in order to meet product release deadlines, significantly reducing the time and resources devoted to safety testing.

3. Insiders at OpenAI have described to analysts a culture of recklessness and secrecy. Employees accused the company of failing to fulfill its commitments and said they were systematically discouraged from voicing concerns.

A more detailed analysis of these points is here.

About conflict of interest in the board of directors

1. The board of directors of OpenAI (specifically, of the non-profit entity) includes independent directors. But the researchers doubt their independence: the directors hold investments in companies that benefit from partnering with OpenAI, which may create a conflict of interest.

2. OpenAI CEO Sam Altman has downplayed his financial interest in OpenAI, stating that he has none. But much of Altman's fortune, estimated at $1.6 billion (Forbes's June 20, 2025 estimate is $1.8 billion - Oninvest), consists of investments in OpenAI partners such as Retro Biosciences and Rewind AI, which benefit from OpenAI's growth.

3. OpenAI has not announced that any of its board members have recused themselves, even though the board must make crucial decisions about restructuring the company, its transition to commercial status, and the removal of the cap on investor profits.

Read more here.

What's to be done with all this?

The authors of the study devote a separate section to what OpenAI should do to fix things. It says some fairly obvious things: preserve the non-profit's control over the company's activities, think more about safety, create control and oversight systems, demand more accountability from managers, re-investigate Altman's conduct, and so on.

The question is, will anyone force OpenAI and Sam Altman to do this? Apparently, they are doing just fine as things stand. And The Midas Project and The Tech Oversight Project, which produced the study, are merely non-profit organizations looking on from the sidelines.
