AGI dominance: Trillion $ prize

Mx Hub
5 min read · Jun 5, 2024

What if you could provide everyone with a personal superintelligent #AI tutor that takes care of their health, wealth, and gaming outcomes?

One that helps a person understand the risks and opportunities and formulate their life challenge at this stage of life.

Such a company would immediately become a Trillion $ company.

I predicted this in https://metaall.medium.com/one-ai-to-rule-top-emerging-tech-428d0581051f

For instance, in the health area there is clear demand for instant diagnostics, like the CheckApp AI tool that predicts risks to your health. This market is growing fast: https://www.perplexity.ai/page/AI-Transforming-Healthcare-FUcgK0xoTnux7Ei.ZuigvQ

In the Health AI market, I have identified the top three leaders:

Verseon.com

Gero, a company focused on anti-aging: https://x.com/fedichev/status/1797808523625288124

And CheckApp, a stealth startup. To learn the details, you should get to the Strong AI Summit, where the best AI projects will be showcased: https://saisum.world/

The second main market is security.

Dubai has become the safest city by placing cameras everywhere. Imagine such a Web of Trust at global scale: billions of cameras and data points aimed at safety.

But an unleashed AGI could destroy mankind.

The latest Lex Fridman conversation was on this topic, with Roman Yampolskiy:

https://twitter.com/lexfridman/status/1797383905034514684

Roman is an AI safety researcher who believes that the chance of AGI eventually destroying human civilization is 99.99%.

This is because of the AGI dominance race between the USA and China, a topic raised this week by Leopold Aschenbrenner, formerly of the #OpenAI Superalignment team.

Like many other thought leaders, he predicts AGI within the next three years:

https://twitter.com/leopoldasch/status/1798068281414414350/photo/1

Why was the OpenAI Superalignment team dissolved?

https://www.wired.com/story/openai-superalignment-team-disbanded

The dissolution of OpenAI’s superalignment team began with Leopold Aschenbrenner and Pavel Izmailov being dismissed for leaking company secrets, The Information reported last month. Another member of the team, William Saunders, left OpenAI in February, according to an internet forum post in his name.

OpenAI declined to comment on the departures of Sutskever or other members of the superalignment team, or the future of its work on long-term AI risks. Research on the risks associated with more powerful models will now be led by John Schulman.

The AGI race has begun, and the winner will take a Trillion $ prize within a year.

By 2025/26 these AGIs will outperform many college graduates. By the end of the decade, they will be smarter than you or me; we will have superintelligence in the true sense of the word.

Along the way, national security forces not seen in half a century will be unleashed, and before long, the Project will be launched. If we’re lucky, we’ll be in an all-out race with the Communist Party; if we’re unlucky, an all-out war.

Now everyone is talking about AI, but few people have even the slightest idea of what is about to hit them. Leading experts are stuck in willful blindness, claiming it is “just predicting the next word.”

Soon the world will wake up. But right now, there may be only a few hundred people, most of them in San Francisco and in AI labs in the UK, MENA and the EU, who are situationally aware. By whatever strange force of fate, I find myself among them. Years ago, these people were ridiculed as crazy, but they trusted the trends that allowed me to correctly predict AI’s achievements over the past few years.

See my report from Davos 2020 (it took place in the same hotel where Trump was staying):

https://metaall.medium.com/emerging-tech-unicorns-strong-ai-convergence-at-davos2020-ddd1a2220ed4

Future AGI race winners: USA, China or Saudi Arabia?

Saudi Arabia has created two AI funds this year, worth $140 bln.

See them at the Strong AI Summit: https://www.facebook.com/groups/nmeta/posts/7735826489828005/

Steve Blank shows the supremacy of China.

The U.S. has nothing in AI comparable to China.

Optimizing profit above all else led to the wholesale offshoring of manufacturing and entire industries in order to lower costs. Investors shifted to making massive investments in industries with the quickest and greatest returns without long-term capital investments (e.g. social media, ecommerce, gaming) instead of in hardware, semiconductors, advanced manufacturing, transportation infrastructure, etc. The result was that, by default, private equity and venture capital were the de facto decision makers of U.S. industrial policy.

Today one private capital fund is attempting to solve this problem.

Gilman Louie, the founder of In-Q-Tel, has started America’s Frontier Fund (AFF). This new fund will invest in key critical deep technologies to help the U.S. keep pace with the Chinese onslaught of capital focused on this area. AFF plans to raise one billion dollars in “patient private capital” from both public and private sources and to be entirely focused on identifying critical technologies and strategic investing. Setting up their fund as a non-profit allows them to focus on long-term investments for the country, not just what’s expedient to maximize profits. It will ensure these investments grow into large commercial and dual-use companies focused on the national interest.

Outsiders are trying to bring luxury and political leverage into the AI race.

The “Viva Technology” conference will put French innovators front and centre as attendees tackle key questions around artificial intelligence (AI), including its potential impact on upcoming elections and climate change.

Paris-based LVMH, the world’s largest luxury group, has also thrown its weight behind VivaTech as a founding partner of the event.

Its CEO, Bernard Arnault, one of the world’s wealthiest individuals, drew crowds during his visit to the group’s sprawling stand, featuring new tech from prestigious brands like Louis Vuitton, Tag Heuer and Dior.

Over the past 18 months, France has attempted to build a reputation as a leader in generative AI, striving to attract new startup launches. But France has no money for AGI.

How can one prosper in this total shift to a new AI economy? The Strong AI Summit has real use cases: https://www.facebook.com/groups/nmeta/posts/7670157853061536/

With this, I invite to a discussion Leopold Aschenbrenner and:

https://twitter.com/Pavel_Izmailov

https://twitter.com/ZoubinGhahrama1

https://twitter.com/gdb

Elon Musk and Yann LeCun, who argue over this topic of the “maximally rigorous pursuit of the truth, without regard to popularity or political correctness”: https://www.forbes.com/sites/roberthart/2024/05/28/elon-musk-is-feuding-with-ai-godfather-yann-lecun-again-heres-why/

https://twitter.com/pmarca and the Techno-Optimists

Steve Blank, with his paper: https://www.facebook.com/groups/nmeta/posts/7735797753164212/

https://twitter.com/historyinmemes

And https://twitter.com/MiTiBennett, the author of this paper:

https://arxiv.org/abs/2404.07227

In this latter paper, Michael Bennett demonstrates that

“Previous theoretical work showed generalisation to be a consequence of “weak” constraints implied by function, not form. Experiments demonstrated choosing weak constraints over simple forms yielded a 110–500% improvement in generalisation rate.”

Techno-optimist AGI projects will be discussed at the #StrongAISummit and in the Future Creators TV series; you’re invited: https://saisum.world/

Mx Hub

Mx Hub is the first sharing ecosystem of top-rated emerging tech and Metaverse leaders.