GPT-2

Generative Pre-trained Transformer 2 (GPT-2)
Original author(s): OpenAI
Initial release: 14 February 2019
Repository: https://github.com/openai/gpt-2
Predecessor: GPT-1
Successor: GPT-3
Type: Large language model
License: MIT[1]
Website: openai.com/blog/gpt-2-1-5b-release/

Generative Pre-trained Transformer 2 (GPT-2) is a large language model by OpenAI and the second in their foundational series of GPT models. GPT-2 was pre-trained on a dataset of 8 million web pages.[2] It was partially released in February 2019, followed by full release of the 1.5-billion-parameter model on November 5, 2019.[3][4][5]

GPT-2 was created as a "direct scale-up" of GPT-1,[6] with a ten-fold increase in both its parameter count and the size of its training dataset.[5] It is a general-purpose learner: its ability to perform a variety of tasks was a consequence of its general ability to accurately predict the next item in a sequence,[2][7] which enabled it to translate texts, answer questions about a topic from a text, summarize passages from a longer text,[7] and generate output sometimes indistinguishable from human writing, although it could become repetitive or nonsensical when generating long passages.[8] It was superseded by GPT-3 and GPT-4, which are no longer open source.

Like its predecessor GPT-1 and its successors GPT-3 and GPT-4, GPT-2 has a generative pre-trained transformer architecture, implementing a deep neural network, specifically a transformer model,[6] which uses attention instead of older recurrence- and convolution-based architectures.[9][10] Attention mechanisms allow the model to selectively focus on the segments of input text it predicts to be most relevant.[11][12] This architecture allows for greatly increased parallelization, and transformer-based models have outperformed earlier RNN/CNN/LSTM-based models on many benchmarks.[6]
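The core attention operation can be illustrated with a short sketch. The following is a minimal, illustrative implementation of scaled dot-product attention in Python with NumPy; real transformer blocks such as GPT-2's add learned projections, multiple attention heads, and causal masking of future positions.

```python
# Minimal sketch of scaled dot-product attention (illustrative only).
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q, K, V: arrays of shape (sequence_length, d_k)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                       # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)        # softmax over the keys
    return weights @ V                                    # weighted sum of values

# Example: 4 tokens with 8-dimensional representations.
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)        # (4, 8)
```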

Training

Since the transformer architecture enabled massive parallelization, GPT models could be trained on larger corpora than previous NLP (natural language processing) models. While the GPT-1 model demonstrated that the approach was viable, GPT-2 would further explore the emergent properties of networks trained on extremely large corpora. CommonCrawl, a large corpus produced by web crawling and previously used in training NLP systems,[13] was considered due to its large size, but was rejected after further review revealed large amounts of unintelligible content.[2][13] Instead, OpenAI developed a new corpus, known as WebText; rather than scraping content indiscriminately from the World Wide Web, WebText was generated by scraping only pages linked to by Reddit posts that had received at least three upvotes prior to December 2017. The corpus was subsequently cleaned; HTML documents were parsed into plain text, duplicate pages were eliminated, and Wikipedia pages were removed (since their presence in many other datasets could have induced overfitting).[2]
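The cleaning steps described above can be sketched in code. The following is a hypothetical, illustrative filtering pass rather than OpenAI's actual pipeline; the field names, the crude HTML stripping, and the hash-based de-duplication are assumptions made for the example.

```python
# Illustrative WebText-style cleaning pass (not OpenAI's actual code).
import hashlib
import re

def clean_pages(pages):
    """pages: iterable of dicts with 'url', 'html', and 'karma' keys (assumed schema)."""
    seen = set()
    for page in pages:
        if page["karma"] < 3:                      # require at least 3 Reddit upvotes
            continue
        if "wikipedia.org" in page["url"]:         # drop Wikipedia pages
            continue
        text = re.sub(r"<[^>]+>", " ", page["html"])   # crude HTML-to-text conversion
        text = re.sub(r"\s+", " ", text).strip()
        digest = hashlib.sha1(text.encode("utf-8")).hexdigest()
        if digest in seen:                         # eliminate duplicate documents
            continue
        seen.add(digest)
        yield text

sample = [{"url": "https://example.com/post", "html": "<p>Hello world</p>", "karma": 5}]
print(list(clean_pages(sample)))
```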

While the cost of training GPT-2 is known to have been $256 per hour,[14][15] the number of hours it took to complete training is unknown; therefore, the overall training cost cannot be estimated accurately.[16] However, comparable large language models using transformer architectures have had their costs documented in more detail; the training processes for BERT and XLNet consumed, respectively, $6,912 and $245,000 of resources.[15]

Release

GPT-2 was first announced on 14 February 2019. A February 2019 article in The Verge by James Vincent said that, while "[the] writing it produces is usually easily identifiable as non-human", it remained "one of the most exciting examples yet" of language generation programs:[17]

Give it a fake headline, and it’ll write the rest of the article, complete with fake quotations and statistics. Feed it the first line of a short story, and it’ll tell you what happens to your character next. It can even write fan fiction, given the right prompt.[17]

The Guardian described this output as "plausible newspaper prose";[8] Kelsey Piper of Vox said "one of the coolest AI systems I’ve ever seen may also be the one that will kick me out of my job".[18] GPT-2's flexibility was described as "impressive" by The Verge; specifically, its ability to translate text between languages, summarize long articles, and answer trivia questions were noted.[17]

A study by the University of Amsterdam employing a modified Turing test found that at least in some scenarios, participants were unable to distinguish poems generated by GPT-2 from those written by humans.[19]

Restrictions and partial release

While "Skub" is not a real product, even the reduced-size model used in DistilGPT2 is capable of creating plausible arguments both for and against it.

While previous OpenAI models had been made immediately available to the public, OpenAI initially refused to make a public release of GPT-2's source code when announcing it in February, citing the risk of malicious use;[8] limited access to the model (i.e. an interface that allowed input and provided output, not the source code itself) was allowed for selected press outlets on announcement.[8] One commonly-cited justification was that, since generated text was usually completely novel, it could be used by spammers to evade automated filters; OpenAI demonstrated a version of GPT-2 fine-tuned to "generate infinite positive – or negative – reviews of products".[8]

Another justification was that GPT-2 could be used to generate text that was obscene or racist. Researchers such as Jeremy Howard warned of "the technology to totally fill Twitter, email, and the web up with reasonable-sounding, context-appropriate prose, which would drown out all other speech and be impossible to filter".[17] The Allen Institute for Artificial Intelligence, in response to GPT-2, announced a tool to detect "neural fake news".[20]

However, opinion was divided. A February 2019 article in The Verge argued that the threat posed by GPT-2 had been exaggerated;[21] Anima Anandkumar, a professor at Caltech and director of machine learning research at Nvidia, said that there was no evidence that GPT-2 had the capabilities to pose the threats described by OpenAI, and that what they did was the "opposite of open", characterizing their refusal to release the full model as "malicious BS".[21] The Gradient published an open letter to OpenAI requesting that they release the model publicly, comparing the threat posed by text-generation AI to the threat posed by the printing press, and giving Photoshop as an example of "a technology that has (thankfully) not destroyed modern society despite its potential for chaos":[22]

Thirty years later, society has emerged relatively unscathed despite Photoshop being simple enough for high school students to use and ubiquitous enough to commandeer its own verb. Why? Precisely because everyone knows about Photoshop.[22]

774M release

While OpenAI did not release the fully-trained model or the corpora it was trained on, description of their methods in prior publications (and the free availability of underlying technology) made it possible for GPT-2 to be replicated by others as free software; one such replication, OpenGPT-2, was released in August 2019, in conjunction with a freely licensed version of WebText called OpenWebText. The cloud compute costs for OpenGPT-2 were given as approximately $50,000.[23]

On August 20, 2019, OpenAI released a partial version of GPT-2, with 774 million parameters (roughly half the size of the full 1.5 billion parameter model).[24]

Full 1.5B release

Initial concerns that GPT-2 would lend itself to widespread misuse did not come to pass; The Verge said that "there are reasons to be skeptical about claims that AI technology will usher in some sort of ‘infopocalypse.’ For a start, we already have programs that can generate plausible text at high volume for little cost: humans."[25] By November 2019, OpenAI said that they had "seen no strong evidence of misuse so far", and the full version, with 1.5 billion parameters trained with forty gigabytes of data, "about eight thousand times larger than the collected works of Shakespeare",[26] was released on November 5, 2019.[3][4]

Small and medium releases

Two smaller releases of GPT-2 are also available: the small version with 117 million parameters and the medium version with 355 million parameters. Both can be downloaded from Hugging Face.[27][28]
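As a hedged example, the small and medium checkpoints can be loaded through the Hugging Face transformers library; the Hub identifiers "gpt2" and "gpt2-medium" are the commonly used model names, downloading the weights requires a network connection, and the printed counts may differ slightly from the commonly quoted figures.

```python
# Load the small and medium GPT-2 checkpoints and count their parameters.
from transformers import GPT2LMHeadModel

for name in ("gpt2", "gpt2-medium"):
    model = GPT2LMHeadModel.from_pretrained(name)          # downloads from the Hub
    n_params = sum(p.numel() for p in model.parameters())
    print(f"{name}: {n_params / 1e6:.0f}M parameters")
```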

Limitations

(Image caption: GPT-2 can generate thematically appropriate text for a range of scenarios, even surreal ones like a CNN article about Donald Trump giving a speech praising the anime character Asuka Langley Soryu. Here, the tendency to generate nonsensical and repetitive text with increasing output length, even in the full 1.5B model, can be seen; in the second paragraph, grammar begins to deteriorate, and the output eventually becomes one incoherent sentence repeated over and over.)

While GPT-2's ability to generate plausible passages of natural language text was generally remarked on positively, its shortcomings were noted as well, especially when generating texts longer than a couple of paragraphs; Vox said "the prose is pretty rough, there’s the occasional non-sequitur, and the articles get less coherent the longer they get".[18] The Verge similarly noted that longer samples of GPT-2 writing tended to "stray off topic" and lack overall coherence;[17] The Register opined that "a human reading it should, after a short while, realize something's up", and noted that "GPT-2 doesn't answer questions as well as other systems that rely on algorithms to extract and retrieve information."[14]

GPT-2 deployment is resource-intensive; the full version of the model is larger than five gigabytes, making it difficult to embed locally into applications, and consumes large amounts of RAM. In addition, performing a single prediction "can occupy a CPU at 100% utilization for several minutes", and even with GPU processing, "a single prediction can take seconds". To alleviate these issues, the company Hugging Face created DistilGPT2, using knowledge distillation to produce a smaller model that "scores a few points lower on some quality benchmarks", but is "33% smaller and twice as fast".[citation needed]
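Knowledge distillation trains a small student model to match the output distribution of a larger teacher. The sketch below shows one common form of the distillation loss, a temperature-scaled KL divergence between teacher and student next-token distributions, in PyTorch; it is illustrative only, and the actual DistilGPT2 training recipe is more involved (it reportedly combines this term with a standard language-modeling loss, among others).

```python
# Illustrative distillation loss between a teacher's and a student's logits.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Soft-target cross-entropy between teacher and student next-token distributions."""
    t = temperature
    student_log_probs = F.log_softmax(student_logits / t, dim=-1)
    teacher_probs = F.softmax(teacher_logits / t, dim=-1)
    # KL divergence, scaled by t^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * (t * t)

student = torch.randn(2, 50257)   # two positions over GPT-2's 50,257-token vocabulary
teacher = torch.randn(2, 50257)
print(distillation_loss(student, teacher))
```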

Application and subsequent research

Even before the release of the full version, GPT-2 was used for a variety of applications and services, as well as for entertainment. In June 2019, a subreddit named r/SubSimulatorGPT2 was created in which a variety of GPT-2 instances trained on different subreddits made posts and replied to each other's comments, creating a situation where one could observe "an AI personification of r/Bitcoin argue with the machine learning-derived spirit of r/ShittyFoodPorn";[25] by July of that year, a GPT-2-based software program released to autocomplete lines of code in a variety of programming languages was described by users as a "game-changer".[29]

In 2019, AI Dungeon was launched, which used GPT-2 to generate dynamic text adventures based on user input.[30] AI Dungeon now offers access to the largest release of the GPT-3 API as an optional paid upgrade, while the free version of the site uses the second-largest release of GPT-3.[31] Latitude, the company formed around AI Dungeon, raised $3.3 million in seed funding in 2021.[32] Several websites host interactive demonstrations of different instances of GPT-2 and other transformer models.[33][34][35]

In February 2021, a crisis center for troubled teens announced that they would begin using a GPT-2-derived chatbot to help train counselors by allowing them to have conversations with simulated teens (this use was purely for internal purposes, and did not involve having GPT-2 communicate with the teens themselves).[36]

On May 9, 2023, OpenAI released a mapped version of GPT-2. OpenAI used its successor model, GPT-4, to map each neuron of GPT-2 and determine its function.[37]

Performance and evaluation

(Image caption: GPT-2 writing a fictional news article about Edward Snowden's actions after winning the 2020 United States presidential election; all highlighted text is machine-generated. While Snowden had, at the time of generation, never been elected to public office, the generated sample is grammatically and stylistically valid.)

GPT-2 became capable of performing a variety of tasks beyond simple text production due to the breadth of its dataset and technique: answering questions, summarizing, and even translating between languages in a variety of specific domains, without being instructed in anything beyond how to predict the next word in a sequence.[17][18]
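Because GPT-2 is only ever asked to continue text, such tasks are posed by writing a prompt whose natural continuation is the answer. A hedged illustration using the Hugging Face transformers pipeline follows; the priming format is loosely modeled on the English/French pairing described in the paper, and the quality of the continuation is not guaranteed.

```python
# Pose translation as a text-continuation task (illustrative prompt format).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompt = ("english: the cat sat on the mat. french: le chat s'est assis sur le tapis.\n"
          "english: where is the library? french:")
print(generator(prompt, max_new_tokens=20, do_sample=False)[0]["generated_text"])
```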

One example of generalized learning is GPT-2's ability to perform machine translation between French and English, which was assessed using the WMT-14 translation tasks. GPT-2's training corpus included virtually no French text; non-English text was deliberately removed while cleaning the dataset prior to training, and as a consequence only 10 MB of the remaining 40,000 MB of training data was French, mostly from foreign-language quotations in English posts and articles.[2]

Despite this, GPT-2 achieved 5 BLEU on the WMT-14 English-to-French test set (slightly below the score of a translation via word-for-word substitution). It was also able to outperform several contemporary (2017) unsupervised machine translation baselines on the French-to-English test set, where GPT-2 achieved 11.5 BLEU. This remained below the highest-performing contemporary unsupervised approach (2019), which had achieved 33.5 BLEU.[2] However, other models used large amounts of French text to achieve these results; GPT-2 was estimated to have used a monolingual French corpus approximately 1/500 the size of comparable approaches.[2]
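BLEU measures n-gram overlap between system output and reference translations. The following is a hedged example of computing a corpus-level BLEU score with the sacrebleu package; the exact evaluation setup used in the GPT-2 paper may differ.

```python
# Score a (toy) set of translations against references with corpus-level BLEU.
import sacrebleu

hypotheses = ["the cat sat on the mat ."]
references = [["the cat was sitting on the mat ."]]   # one reference stream
print(sacrebleu.corpus_bleu(hypotheses, references).score)
```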

Model comparison (architecture, parameter count, training data):

GPT-1: 12-level, 12-headed Transformer decoder (no encoder), followed by linear-softmax; 0.12 billion parameters; trained on BookCorpus:[38] 4.5 GB of text from 7,000 unpublished books of various genres.
GPT-2: GPT-1 architecture, but with modified normalization; 1.5 billion parameters; trained on WebText: 40 GB[39] of text (8 million documents) from 45 million webpages upvoted on Reddit.
GPT-3: GPT-2 architecture, but with modifications to allow larger scaling; 175 billion parameters; trained on 570 GB of plaintext (300 billion tokens) from CommonCrawl, WebText, English Wikipedia, and two books corpora (Books1 and Books2).

GPT-2 was followed by the 175-billion-parameter GPT-3,[40] revealed to the public in 2020[41] (whose source code has never been made available). Access to GPT-3 is provided exclusively through APIs offered by OpenAI and Microsoft.[42] GPT-3 was in turn followed by GPT-4.

References

  1. ^ "gpt-2". GitHub. Archived from the original on 11 March 2023. Retrieved 13 March 2023.
  2. ^ a b c d e f g Radford, Alec; Wu, Jeffrey; Child, Rewon; Luan, David; Amodei, Dario; Sutskever, Ilya (14 February 2019). "Language models are unsupervised multitask learners" (PDF). OpenAI. 1 (8). Archived (PDF) from the original on 6 February 2021. Retrieved 19 December 2020.
  3. ^ a b Vincent, James (7 November 2019). "OpenAI has published the text-generating AI it said was too dangerous to share". The Verge. Archived from the original on 11 June 2020. Retrieved 19 December 2020.
  4. ^ a b "GPT-2: 1.5B Release". OpenAI. 2019-11-05. Archived from the original on 2019-11-14. Retrieved 2019-11-14.
  5. ^ a b "Better Language Models and Their Implications". OpenAI. 14 February 2019. Archived from the original on 19 December 2020. Retrieved 19 December 2020.
  6. ^ a b c Radford, Alec; Narasimhan, Karthik; Salimans, Tim; Sutskever, Ilya (11 June 2018). "Improving Language Understanding by Generative Pre-Training" (PDF). OpenAI. p. 12. Archived (PDF) from the original on 26 January 2021. Retrieved 23 January 2021.
  7. ^ a b Hegde, Chaitra; Patil, Shrikumar (9 June 2020). "Unsupervised Paraphrase Generation using Pre-trained Language Models". arXiv:2006.05477 [cs.CL].
  8. ^ a b c d e Hern, Alex (14 February 2019). "New AI fake text generator may be too dangerous to release, say creators". The Guardian. Archived from the original on 14 February 2019. Retrieved 19 December 2020.
  9. ^ Vaswani, Ashish; Shazeer, Noam; Parmar, Niki; Uszkoreit, Jakob; Jones, Llion; Gomez, Aidan N; Kaiser, Łukasz; Polosukhin, Illia (2017). "Attention is All you Need" (PDF). Advances in Neural Information Processing Systems. 30. Curran Associates, Inc.
  10. ^ Olah, Chris; Carter, Shan (8 September 2016). "Attention and Augmented Recurrent Neural Networks". Distill. 1 (9). doi:10.23915/distill.00001. Archived from the original on 22 December 2020. Retrieved 22 January 2021.
  11. ^ Bahdanau, Dzmitry; Cho, Kyunghyun; Bengio, Yoshua (1 September 2014). "Neural Machine Translation by Jointly Learning to Align and Translate". arXiv:1409.0473 [cs.CL].
  12. ^ Luong, Minh-Thang; Pham, Hieu; Manning, Christopher D. (17 August 2015). "Effective Approaches to Attention-based Neural Machine Translation". arXiv:1508.04025 [cs.CL].
  13. ^ a b Trinh, Trieu H.; Le, Quoc V. (7 Jun 2018). "A Simple Method for Commonsense Reasoning". arXiv:1806.02847 [cs.CL].
  14. ^ a b Quach, Katyanna (14 February 2019). "Roses are red, this is sublime: We fed OpenAI's latest chat bot a classic Reg headline". The Register. Archived from the original on 9 March 2021. Retrieved 27 February 2021.
  15. ^ a b "The Staggering Cost of Training SOTA AI Models". Synced. 27 June 2019. Archived from the original on 24 November 2020. Retrieved 27 February 2021.
  16. ^ Wiggers, Kyle (23 March 2020). "Google open-sources framework that reduces AI training costs by up to 80%". VentureBeat. Archived from the original on 26 November 2020. Retrieved 27 February 2021.
  17. ^ a b c d e f Vincent, James (14 February 2019). "OpenAI's new multitalented AI writes, translates, and slanders". The Verge. Archived from the original on 18 December 2020. Retrieved 19 December 2020.
  18. ^ a b c Piper, Kelsey (14 February 2019). "An AI helped us write this article". Vox. Archived from the original on 8 November 2020. Retrieved 19 December 2020.
  19. ^ Köbis, Nils; Mossink, Luca D. (1 January 2021). "Artificial intelligence versus Maya Angelou: Experimental evidence that people cannot differentiate AI-generated from human-written poetry". Computers in Human Behavior. 114: 106553. doi:10.1016/j.chb.2020.106553. hdl:21.11116/0000-0007-13E5-1.
  20. ^ Schwartz, Oscar (4 July 2019). "Could 'fake text' be the next global political threat?". The Guardian. Archived from the original on 16 July 2019. Retrieved 16 July 2019.
  21. ^ a b Vincent, James (21 February 2019). "AI researchers debate the ethics of sharing potentially harmful programs". The Verge. Archived from the original on 9 February 2021. Retrieved 27 February 2021.
  22. ^ a b Zhang, Hugh (19 February 2019). "OpenAI: Please Open Source Your Language Model". The Gradient. Archived from the original on 28 January 2021. Retrieved 28 February 2021.
  23. ^ Gokaslan, Aaron; Cohen, Vanya; Pavlick, Ellie; Tellex, Stefanie (22 August 2019). "OpenGPT-2: We Replicated GPT-2 Because You Can Too". Noteworthy. Archived from the original on 29 April 2023. Retrieved 27 February 2021.
  24. ^ Johnson, Khari (20 August 2019). "OpenAI releases curtailed version of GPT-2 language model". VentureBeat. Archived from the original on 18 December 2020. Retrieved 19 December 2020.
  25. ^ a b Vincent, James (6 June 2019). "There's a subreddit populated entirely by AI personifications of other subreddits". The Verge. Archived from the original on 21 February 2021. Retrieved 27 February 2021.
  26. ^ Murati, Ermira (2022-04-13). "Language & Coding Creativity | American Academy of Arts and Sciences". www.amacad.org. Retrieved 2024-03-18.
  27. ^ "GPT-2 Small".
  28. ^ "Openai-community/Gpt2-medium · Hugging Face". Hugging Face.
  29. ^ Vincent, James (24 July 2019). "This AI-powered autocompletion software is Gmail's Smart Compose for coders". The Verge. Archived from the original on 9 March 2021. Retrieved 27 February 2021.
  30. ^ Olson, Mathew (17 December 2019). "AI Dungeon 2, the Text Adventure Where You Can do Nearly Anything, Is Now on Mobile". Archived from the original on 20 September 2020. Retrieved 27 February 2021.
  31. ^ Nelius, Joanna (3 August 2020). "This AI-Powered Choose-Your-Own-Adventure Text Game Is Super Fun and Makes No Sense". Gizmodo. Archived from the original on 28 February 2021. Retrieved 27 February 2021.
  32. ^ Ha, Anthony (4 February 2021). "AI Dungeon-maker Latitude raises $3.3M to build games with 'infinite' story possibilities". TechCrunch. Archived from the original on 21 February 2021. Retrieved 27 February 2021.
  33. ^ "Write With Transformer". Archived from the original on December 4, 2019. Retrieved December 4, 2019.
  34. ^ "Talk to Transformer". Archived from the original on December 4, 2019. Retrieved December 4, 2019.
  35. ^ "CreativeEngines". Archived from the original on February 3, 2023. Retrieved June 25, 2021.
  36. ^ Ohlheiser, Abby; Hao, Karen (26 February 2021). "An AI is training counselors to deal with teens in crisis". MIT Technology Review. Archived from the original on 27 February 2021. Retrieved 27 February 2021.
  37. ^ "Language models can explain neurons in language models". OpenAI. Retrieved 13 May 2023.
  38. ^ Zhu, Yukun; Kiros, Ryan; Zemel, Rich; Salakhutdinov, Ruslan; Urtasun, Raquel; Torralba, Antonio; Fidler, Sanja (2015). "Aligning Books and Movies: Towards Story-Like Visual Explanations by Watching Movies and Reading Books". International Conference on Computer Vision 2015: 19–27. arXiv:1506.06724. Archived from the original on 2023-02-05. Retrieved 2023-02-05.
  39. ^ Murati, Ermira (2022-04-13). "Language & Coding Creativity | American Academy of Arts and Sciences". www.amacad.org. Retrieved 2024-03-18.
  40. ^ Brown, Tom B.; Mann, Benjamin; Ryder, Nick; Subbiah, Melanie; Kaplan, Jared; Dhariwal, Prafulla; Neelakantan, Arvind; Shyam, Pranav; Sastry, Girish; Askell, Amanda; Agarwal, Sandhini; Herbert-Voss, Ariel; Krueger, Gretchen; Henighan, Tom; Child, Rewon; Ramesh, Aditya; Ziegler, Daniel M.; Wu, Jeffrey; Winter, Clemens; Hesse, Christopher; Chen, Mark; Sigler, Eric; Litwin, Mateusz; Gray, Scott; Chess, Benjamin; Clark, Jack; Berner, Christopher; McCandlish, Sam; Radford, Alec; Sutskever, Ilya; Amodei, Dario (July 22, 2020). "Language Models are Few-Shot Learners". arXiv:2005.14165 [cs.CL].
  41. ^ Arram (July 9, 2020). "GPT-3: An AI that's eerily good at writing almost anything". Arram Sabeti. Archived from the original on July 20, 2020. Retrieved July 31, 2020.
  42. ^ Hao, Karen (September 23, 2020). "OpenAI is giving Microsoft exclusive access to its GPT-3 language model". MIT Technology Review. Archived from the original on 2021-02-05. Retrieved 2020-09-25. The companies say OpenAI will continue to offer its public-facing API, which allows chosen users to send text to GPT-3 or OpenAI's other models and receive its output. Only Microsoft, however, will have access to GPT-3's underlying code, allowing it to embed, repurpose, and modify the model as it pleases.
