{"id":10303,"date":"2022-07-19T19:37:34","date_gmt":"2022-07-20T00:37:34","guid":{"rendered":"http:\/\/blog.jlbn.net\/?p=10303"},"modified":"2022-07-19T19:37:34","modified_gmt":"2022-07-20T00:37:34","slug":"as-ai-language-skills-grow-so-do-scientists-concerns","status":"publish","type":"post","link":"http:\/\/blog.jlbn.net\/?p=10303","title":{"rendered":"As AI language skills grow, so do scientists\u2019 concerns"},"content":{"rendered":"\n<p>The tech industry\u2019s latest artificial intelligence constructs can be pretty convincing if you ask them what it feels like to be a sentient computer, or maybe just a dinosaur or squirrel. But they\u2019re not so good \u2014 and sometimes dangerously bad \u2014 at handling other seemingly straightforward tasks.<\/p>\n\n\n\n<p>Take, for instance, GPT-3, a Microsoft-controlled system that can generate paragraphs of human-like text based on what it\u2019s learned from a vast database of digital books and online writings. It\u2019s considered one of the most advanced of a new generation of AI algorithms that can converse, generate readable text on demand and even produce novel images and video.<\/p>\n\n\n\n<p>Among other things, GPT-3 can write up most any text you ask for \u2014 a cover letter for a zookeeping job, say, or a Shakespearean-style sonnet set on Mars. But when Pomona College professor Gary Smith asked it a simple but nonsensical question about walking upstairs, GPT-3 muffed it.<\/p>\n\n\n\n<p>\u201cYes, it is safe to walk upstairs on your hands if you wash them first,\u201d the AI replied.<\/p>\n\n\n\n<p>These powerful and power-chugging AI systems, technically known as \u201clarge language models\u201d because they\u2019ve been trained on a huge body of text and other media, are already getting baked into customer service chatbots, Google searches and \u201cauto-complete\u201d email features that finish your sentences for you. 
But most of the tech companies that built them have been secretive about their inner workings, making it hard for outsiders to understand the flaws that can make them a source of misinformation, racism and other harms.<\/p>\n\n\n\n<p>\u201cThey\u2019re very good at writing text with the proficiency of human beings,\u201d said Teven Le Scao, a research engineer at the AI startup Hugging Face. \u201cSomething they\u2019re not very good at is being factual. It looks very coherent. It\u2019s almost true. But it\u2019s often wrong.\u201d<\/p>\n\n\n\n<p>That\u2019s one reason a coalition of AI researchers co-led by Le Scao \u2014 with help from the French government \u2014 launched a new large language model Tuesday that\u2019s supposed to serve as an antidote to closed systems such as GPT-3. The group is called BigScience and its model is BLOOM, for the BigScience Large Open-science Open-access Multilingual Language Model. Its main breakthrough is that it works across 46 languages, including Arabic, Spanish and French \u2014 unlike most systems that are focused on English or Chinese.<\/p>\n\n\n\n<p>It\u2019s not just Le Scao\u2019s group aiming to open up the black box of AI language models. 
Big Tech company Meta, the parent of Facebook and Instagram, is also calling for a more open approach as it tries to catch up to the systems built by Google and OpenAI, the company that runs GPT-3.<\/p>\n\n\n\n<p>\u201cWe\u2019ve seen announcement after announcement after announcement of people doing this kind of work, but with very little transparency, very little ability for people to really look under the hood and peek into how these models work,\u201d said Joelle Pineau, managing director of Meta AI.<\/p>\n\n\n\n<p>Competitive pressure to build the most eloquent or informative system \u2014 and profit from its applications \u2014 is one of the reasons that most tech companies keep a tight lid on them and don\u2019t collaborate on community norms, said Percy Liang, an associate computer science professor at Stanford who directs its Center for Research on Foundation Models.<\/p>\n\n\n\n<p>\u201cFor some companies this is their secret sauce,\u201d Liang said. But they are often also worried that losing control could lead to irresponsible uses. As AI systems are increasingly able to write health advice websites, high school term papers or political screeds, misinformation can proliferate and it will get harder to know what\u2019s coming from a human or a computer.<\/p>\n\n\n\n<p>Meta recently launched a new language model called OPT-175B that uses publicly available data \u2014 from heated commentary on Reddit forums to the archive of U.S. patent records and a trove of emails from the Enron corporate scandal. Meta says its openness about the data, code and research logbooks makes it easier for outside researchers to help identify and mitigate the bias and toxicity that it picks up by ingesting how real people write and communicate.<\/p>\n\n\n\n<p>\u201cIt is hard to do this. We are opening ourselves for huge criticism. 
We know the model will say things we won\u2019t be proud of,\u201d Pineau said.<\/p>\n\n\n\n<p>While most companies have set their own internal AI safeguards, Liang said what\u2019s needed are broader community standards to guide research and decisions such as when to release a new model into the wild.<\/p>\n\n\n\n<p>It doesn\u2019t help that these models require so much computing power that only giant corporations and governments can afford them. BigScience, for instance, was able to train its models because it was offered access to France\u2019s powerful Jean Zay supercomputer near Paris.<\/p>\n\n\n\n<p>The trend for ever-bigger, ever-smarter AI language models that could be \u201cpre-trained\u201d on a wide body of writings took a big leap in 2018 when Google introduced a system known as BERT that uses a so-called \u201ctransformer\u201d technique that compares words across a sentence to predict meaning and context. But what really impressed the AI world was GPT-3, released by San Francisco-based startup OpenAI in 2020 and soon after exclusively licensed by Microsoft.<\/p>\n\n\n\n<p>GPT-3 led to a boom in creative experimentation as AI researchers with paid access used it as a sandbox to gauge its performance \u2014 though without important information about the data it was trained on.<\/p>\n\n\n\n<p>OpenAI has broadly described its training sources in a research paper, and has also publicly reported its efforts to grapple with potential abuses of the technology. But BigScience co-leader Thomas Wolf said it doesn\u2019t provide details about how it filters that data, or give access to the processed version to outside researchers.<\/p>\n\n\n\n<p>\u201cSo we can\u2019t actually examine the data that went into the GPT-3 training,\u201d said Wolf, who is also a chief science officer at Hugging Face. \u201cThe core of this recent wave of AI tech is much more in the dataset than the models. 
The most important ingredient is data and OpenAI is very, very secretive about the data they use.\u201d<\/p>\n\n\n\n<p>Wolf said that opening up the datasets used for language models helps humans better understand their biases. A multilingual model trained in Arabic is far less likely to spit out offensive remarks or misunderstandings about Islam than one that\u2019s only trained on English-language text in the U.S., he said.<\/p>\n\n\n\n<p>One of the newest AI experimental models on the scene is Google\u2019s LaMDA, which also incorporates speech and is so impressive at responding to conversational questions that one Google engineer argued it was approaching consciousness \u2014 a claim that got him suspended from his job last month.<\/p>\n\n\n\n<p>Colorado-based researcher Janelle Shane, author of the AI Weirdness blog, has spent the past few years creatively testing these models, especially GPT-3 \u2014 often to humorous effect. But to point out the absurdity of thinking these systems are self-aware, she recently instructed it to be an advanced AI but one which is secretly a Tyrannosaurus rex or a squirrel.<\/p>\n\n\n\n<p>\u201cIt is very exciting being a squirrel. I get to run and jump and play all day. I also get to eat a lot of food, which is great,\u201d GPT-3 said, after Shane asked it for a transcript of an interview and posed some questions.<\/p>\n\n\n\n<p>Shane has learned more about its strengths, such as its ease at summarizing what\u2019s been said around the internet about a topic, and its weaknesses, including its lack of reasoning skills, the difficulty of sticking with an idea across multiple sentences and a propensity for being offensive.<\/p>\n\n\n\n<p>\u201cI wouldn\u2019t want a text model dispensing medical advice or acting as a companion,\u201d she said. \u201cIt\u2019s good at that surface appearance of meaning if you are not reading closely. 
It\u2019s like listening to a lecture as you\u2019re falling asleep.\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"<p>The tech industry\u2019s latest artificial intelligence constructs can be pretty convincing if you ask them what it feels like to<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[254,3421],"tags":[3552,888],"_links":{"self":[{"href":"http:\/\/blog.jlbn.net\/index.php?rest_route=\/wp\/v2\/posts\/10303"}],"collection":[{"href":"http:\/\/blog.jlbn.net\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/blog.jlbn.net\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/blog.jlbn.net\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"http:\/\/blog.jlbn.net\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=10303"}],"version-history":[{"count":1,"href":"http:\/\/blog.jlbn.net\/index.php?rest_route=\/wp\/v2\/posts\/10303\/revisions"}],"predecessor-version":[{"id":10304,"href":"http:\/\/blog.jlbn.net\/index.php?rest_route=\/wp\/v2\/posts\/10303\/revisions\/10304"}],"wp:attachment":[{"href":"http:\/\/blog.jlbn.net\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=10303"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/blog.jlbn.net\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=10303"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/blog.jlbn.net\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=10303"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}