OpenAI is an artificial intelligence (AI) research laboratory consisting of the for-profit corporation OpenAI LP and its parent company, the non-profit OpenAI Inc. The company, considered a competitor to DeepMind, conducts research in the field of AI with the stated goal of promoting and developing friendly AI in a way that benefits humanity as a whole.
The organization was founded in San Francisco in late 2015 by Elon Musk, Sam Altman, and others, who collectively pledged US$1 billion. Musk resigned from the board in February 2018 but remained a donor. In 2019, OpenAI LP received a US$1 billion investment from Microsoft.
History
In December 2015, Elon Musk, Sam Altman, and other investors announced the formation of OpenAI and pledged over US$1 billion to the venture. The organization stated it would "freely collaborate" with other institutions and researchers by making its patents and research open to the public.
On April 27, 2016, OpenAI released a public beta of "OpenAI Gym", its platform for reinforcement learning research.
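For readers unfamiliar with Gym, the sketch below shows the agent-environment loop that the platform standardized for reinforcement learning research. It uses the classic Gym interface of that era with a random policy for the demo; the specific environment name and episode length are illustrative choices, not details drawn from this article.

```python
# A minimal sketch of the classic OpenAI Gym loop (original, pre-0.26 API).
# The environment "CartPole-v0" and the step count are arbitrary examples.
import gym

env = gym.make("CartPole-v0")    # construct a standard control benchmark
observation = env.reset()        # begin a new episode, get initial state

for _ in range(100):
    action = env.action_space.sample()                  # random policy for illustration
    observation, reward, done, info = env.step(action)  # advance one timestep
    if done:                                            # episode over (e.g. pole fell)
        observation = env.reset()

env.close()
```

The key design choice Gym introduced was this uniform `reset`/`step` interface across many environments, which let researchers benchmark reinforcement learning algorithms against one another without per-task integration work.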
On December 5, 2016, OpenAI released "Universe", a software platform for measuring and training an AI's general intelligence across the world's supply of games, websites, and other applications.
On February 21, 2018, Musk resigned his board seat, citing "a potential future conflict (of interest)" with Tesla's AI development for self-driving cars, but remained a donor.

Motives
Some scientists, such as Stephen Hawking and Stuart Russell, have articulated concerns that if advanced AI someday gains the ability to re-design itself at an ever-increasing rate, an unstoppable "intelligence explosion" could lead to human extinction. Musk characterizes AI as humanity's "biggest existential threat." OpenAI's founders structured it as a non-profit so that they could focus its research on creating a positive long-term human impact.
Musk and Altman have stated they are motivated in part by concerns about the existential risk from artificial general intelligence. OpenAI states that "it's hard to fathom how much human-level AI could benefit society," and that it is equally difficult to comprehend "how much it could damage society if built or used incorrectly". In its view, research on safety cannot safely be postponed: "because of AI's surprising history, it's hard to predict when human-level AI might come within reach." OpenAI states that AI "should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible...", a sentiment that has been expressed elsewhere in reference to a potentially enormous class of AI-enabled products: "Are we really willing to let our society be infiltrated by autonomous software and hardware agents whose details of operation are known only to a select few? Of course not." Co-chair Sam Altman expects the decades-long project to surpass human intelligence.