{"id":1988,"date":"2026-03-08T07:53:19","date_gmt":"2026-03-08T07:53:19","guid":{"rendered":"https:\/\/owspakistan.com\/?p=1988"},"modified":"2026-03-08T07:53:19","modified_gmt":"2026-03-08T07:53:19","slug":"a-roadmap-for-ai-if-anyone-will-listen","status":"publish","type":"post","link":"https:\/\/owspakistan.com\/?p=1988","title":{"rendered":"A roadmap for AI, if anyone will listen"},"content":{"rendered":"<div>\n<p id=\"speakable-summary\" class=\"wp-block-paragraph\">While Washington\u2019s breakup with Anthropic exposed the complete lack of any coherent rules governing artificial intelligence, a bipartisan coalition of thinkers has assembled something the government has so far declined to produce: a framework for what responsible AI development should actually look like.<\/p>\n<p class=\"wp-block-paragraph\">The <a rel=\"nofollow noopener\" href=\"https:\/\/humanstatement.org\/\" target=\"_blank\">Pro-Human Declaration<\/a> was finalized before last week\u2019s Pentagon-Anthropic standoff, but the collision of the two events wasn\u2019t lost on anyone involved.<\/p>\n<p class=\"wp-block-paragraph\">\u201cThere\u2019s something quite remarkable that has happened in America just in the last four months,\u201d said Max Tegmark, the MIT physicist and AI researcher who helped organize the effort, <a rel=\"nofollow noopener\" href=\"https:\/\/podcasts.apple.com\/us\/podcast\/the-ai-safety-showdown-max-tegmark-on-government\/id1498270180?i=1000753066787\" target=\"_blank\">in conversation<\/a> with this editor. \u201cPolling suddenly [is showing] that 95% of all Americans oppose an unregulated race to superintelligence.\u201d<\/p>\n<p class=\"wp-block-paragraph\">The newly published document, signed by hundreds of experts, former officials, and public figures, opens with the no-nonsense observation that humanity is at a fork in the road. 
One path, which the declaration calls \u201cthe race to replace,\u201d leads to humans being supplanted first as workers, then as decision-makers, as power accrues to unaccountable institutions and their machines. The other leads to AI that massively expands human potential.<\/p>\n<p class=\"wp-block-paragraph\">The latter scenario depends on five key pillars: keeping humans in charge, avoiding the concentration of power, protecting the human experience, preserving individual liberty, and holding AI companies legally accountable. Among its more muscular provisions are an outright prohibition on superintelligence development until there\u2019s scientific consensus it can be done safely and genuine democratic buy-in; mandatory off-switches on powerful systems; and a ban on architectures that are capable of self-replication, autonomous self-improvement, or resistance to shutdown.<\/p>\n<p class=\"wp-block-paragraph\">The declaration\u2019s release coincides with a period that makes its urgency far easier to appreciate. On the last Friday in February, Defense Secretary Pete Hegseth designated Anthropic \u2014 whose AI already runs on classified military platforms \u2014 a \u201csupply chain risk,\u201d a label ordinarily reserved for firms with ties to China, after the company refused to grant the Pentagon unlimited use of its technology. Hours later, OpenAI cut its own deal with the Defense Department, one that legal experts say will be difficult to enforce in any meaningful way. What it all laid bare is how costly Congressional inaction on AI has become.<\/p>\n<p class=\"wp-block-paragraph\">As Dean Ball, a senior fellow at the Foundation for American Innovation, <a rel=\"nofollow noopener\" href=\"https:\/\/www.nytimes.com\/2026\/03\/07\/technology\/anthropic-openai-pentagon-dario-amodei-sam-altman.html\" target=\"_blank\">told The New York Times<\/a> afterward, \u201cThis is not just some dispute over a contract. 
This is the first conversation we have had as a country about control over AI systems.\u201d<\/p>\n<p class=\"wp-block-paragraph\">When we spoke, Tegmark reached for an analogy that most people can understand. \u201cYou never have to worry that some drug company is going to release some other drug that causes massive harm before people have figured out how to make it safe,\u201d he said, \u201cbecause the FDA won\u2019t allow them to release anything until it\u2019s safe enough.\u201d<\/p>\n<p class=\"wp-block-paragraph\">Washington turf wars rarely generate the kind of public pressure that changes laws. Instead, Tegmark sees child safety as the pressure point most likely to crack the current impasse. Indeed, the declaration calls for mandatory pre-deployment testing of AI products \u2014 particularly chatbots and companion apps aimed at younger users \u2014 covering risks including increased suicidal ideation, exacerbation of mental health conditions, and emotional manipulation.<\/p>\n<p class=\"wp-block-paragraph\">\u201cIf some creepy old man is texting an 11-year-old pretending to be a young girl and trying to persuade this boy to commit suicide, the guy can go to jail for that,\u201d Tegmark said. \u201cWe already have laws. It\u2019s illegal. So why is it different if a machine does it?\u201d<\/p>\n<p class=\"wp-block-paragraph\">He believes that once the principle of pre-release testing is established for children\u2019s products, the scope will widen almost inevitably. 
\u201cPeople will come along and be like \u2014 let\u2019s add a few other requirements. Maybe we should also test that this can\u2019t help terrorists make bioweapons. Maybe we should test to make sure that superintelligence doesn\u2019t have the ability to overthrow the U.S. government.\u201d<\/p>\n<p class=\"wp-block-paragraph\">It is no small thing that former Trump advisor Steve Bannon and Susan Rice, President Obama\u2019s National Security Advisor, have signed the same document \u2014 along with former Joint Chiefs Chairman Mike Mullen and progressive faith leaders.<\/p>\n<p class=\"wp-block-paragraph\">\u201cWhat they agree on, of course, is that they\u2019re all human,\u201d says Tegmark. \u201cIf it\u2019s going to come down to whether we want a future for humans or a future for machines, of course they\u2019re going to be on the same side.\u201d<\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>While Washington\u2019s breakup with Anthropic exposed the complete lack of any coherent rules governing artificial intelligence, a bipartisan coalition of thinkers has assembled something the government has so far declined to produce: a framework for what responsible AI development should actually look like. 
The Pro-Human Declaration was finalized before last week\u2019s Pentagon-Anthropic standoff, but the<\/p>\n","protected":false},"author":1,"featured_media":1989,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":true,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[45,47,284,524,41,525,427,426,289,51,109,526],"tags":[519,520,521,522,523],"class_list":["post-1988","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai","category-ai-coding-assistant","category-ai-regulation","category-anthropic","category-artificial-intelligence","category-max-tegmark","category-openai","category-pentagon","category-pro-ai-pacs","category-tech","category-technology","category-the-pro-human-declaration","tag-anthropic","tag-max-tegmark","tag-openai","tag-pentagon","tag-the-pro-human-declaration"],"jetpack_publicize_connections":[],"_links":{"self":[{"href":"https:\/\/owspakistan.com\/index.php?rest_route=\/wp\/v2\/posts\/1988","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/owspakistan.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/owspakistan.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/owspakistan.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/owspakistan.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=1988"}],"version-history":[{"count":0,"href":"https:\/\/owspakistan.com\/index.php?rest_route=\/wp\/v2\/posts\/1988\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/owspakistan.com\/index.php?rest_route=\/wp\/v2\/media\/1989"}],"wp:attachment":[{"href":"https:\/\/owspakistan.com\/index.php?rest_route=%
2Fwp%2Fv2%2Fmedia&parent=1988"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/owspakistan.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=1988"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/owspakistan.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=1988"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}