Capital achieves another victory in the field of artificial intelligence

After a long string of victories, capital has just scored another major win, this time in the field of AI ethics. The controversy surrounding the abrupt firing and rehiring of OpenAI CEO Sam Altman demonstrated the failure of a non-profit company, one that aimed to prioritize AI safety over profits, to retain control of its for-profit subsidiary.

OpenAI was founded in 2015 with the goal of ensuring that artificial general intelligence — autonomous systems that can outperform humans at all or most tasks — does not spiral out of control, if or when it comes into being.

The prospect of artificial general intelligence raises the same problem that Mary Shelley posed in “Frankenstein”: our own innovation may destroy us, but who can stop anyone from seeking the fame, power and fortune that can accompany “success”? What happened with Altman offers one answer: we cannot rely on ethical rules, corporate governance structures, or even principled board members to keep us safe.

They tried as hard as they could, but it was not enough. Initially, OpenAI sought to raise enough money through donations to compete in a rapidly evolving and highly competitive field. But having generated only $130 million in three years, it remained far from its $1 billion goal. It therefore had to turn to private capital while trying to preserve its original mission within a complex governance structure. This meant creating two for-profit subsidiaries, one of which, a wholly owned limited liability company, assumed the role of general (managing) partner of its sister company, a limited partnership. Since limited partners do not have voting rights, OpenAI exercised full control over the partnership, at least in theory. The limited partnership then created its own limited liability company, OpenAI Global LLC, to attract private capital, including a $13 billion investment from Microsoft, which received no formal control rights.

Ultimately, the company sought to preserve the original mission by having several members of the nonprofit’s board of directors also serve as employees of OpenAI Global LLC, including Altman, who became the company’s CEO. What could go wrong? Everything, as it turns out. When the board decided to fire the CEO of its for-profit subsidiary — apparently because a majority of its members saw a conflict between his ambitions and the company’s mission — the entire structure collapsed. Microsoft jumped in and offered to hire Altman and anyone else willing to join him, putting OpenAI’s financial future in jeopardy.

As OpenAI warns in its operating agreement, “An investment in OpenAI Global LLC is a high-risk investment. Investors may lose their financial contributions and not receive any return from them.” These warnings did not deter Microsoft, which cared more about OpenAI’s products, and the people who develop them, than about profits.

Although Altman has been reinstated at OpenAI, with a new board of directors that seems more likely to do his bidding, it is safe to assume that Microsoft will ultimately be the one calling the shots. In any case, Altman owes his job, and the future of the company he runs, to Microsoft.

For all the media coverage this controversy has generated, there is nothing new about it. Throughout history, capital has usually won out when there were competing visions for the future of an innovative product or business model.

Take, for example, all the ambitious promises made by private companies to tackle climate change (perhaps in the hope of avoiding regulation or worse).

In 2022, Larry Fink, CEO of BlackRock, the world’s largest asset manager, predicted a “tectonic shift” toward sustainable investment strategies. But he quickly backpedaled. Having demoted climate change from an investment strategy to a mere risk factor, BlackRock now prides itself on ensuring “corporate sustainability.” If the board of a nonprofit committed (in writing) to AI safety cannot protect the world from its own CEO, we should not bet on the CEO of a for-profit asset manager to save us from climate change.

Another example is the long history of promises broken for profit in private money creation. Every form of money is credit, but there is a difference between sovereign credit, or state money, and private credit, or private money. Most of the money in circulation is private money — bank deposits, credit cards, and the list goes on. Private money owes its success to state money: without the state’s willingness to maintain central banks that ensure the stability of financial markets, those markets and their intermediaries would fail again and again, dragging down the real economy.

States and banks are the oldest example of a “public-private partnership,” one that promises benefits for both banks and society. But winners like to take it all, and banks are no exception. They have enjoyed the great privilege of controlling the financial system, with the state propping that system up in times of crisis.

Other intermediaries have since learned to exploit the system as well, yet few states are willing to claw back control, fearing that doing so would trigger capital flight. As a result, the financial system has grown to the point that central banks cannot avoid coming to its rescue whenever a crisis looms. This dynamic persists because state decisions are dictated by the pressures of capital, not the other way around.

It is not surprising that OpenAI failed to advance its mission. If states cannot protect their citizens from the harms caused by capital, how can a small non-profit organization with a few dedicated board members do so?

Katharina Pistor is Professor of Comparative Law at Columbia Law School and author of “The Code of Capital: How the Law Creates Wealth and Inequality.”

Project Syndicate service

2023-12-03 16:48:53
