News highlights: In March, a buzz of activity surrounded AI and its practical applications. AI underpins many of the Space-based intelligence systems driving the emerging Space industry forward. As more than 1,200 scientists and founders signed a petition to slow the pace of AI development, innovators debated the purpose and policy of AI. Because AI is prominent in rocket systems, robotics, predictive maintenance, and other components of the Space innovation race, its challenges are leading concerns for emerging Space industry innovators such as Better Futures' parent company, LiftPort Group, the innovator behind the Lunar elevator project.

Demand, Domination, and the Dominoes of AI Generation

Artificial intelligence now dominates innovation thinking. For everything on Earth, corresponding systems are being built for the near-heavens. Anyone hoping to someday reach the far-flung corners of the galaxy will interact with artificial intelligence systems, for as innovation advances, these systems govern the hundreds of small decisions made in the final few seconds of a rocket launch.

An Open Letter

The Future of Life Institute, a nonprofit focused on mitigating risks from transformative technologies, issued an "immediate pause" open letter calling on all AI developers to pause "training of AI systems more powerful than GPT-4" for at least six months. The petition received signatures from high-profile AI innovators and industry leaders, including Tesla and SpaceX founder Elon Musk. The call for a pause followed the realization of just how powerful OpenAI's GPT-4 is and will be. The petition, which at the time of this report had 2,568 signatures, made a point of noting that AI development should not pause altogether, only work on systems more powerful than GPT-4. The pause "should be public and verifiable and include all key actors," The Future of Life Institute wrote. It would cover "generative" AI, Fast Company reported, and would last long enough for creators to determine whether these projects add verifiable value to human development and whether they can be contained.

Operationalizing AI

In March, the New Space Age conference at the MIT Sloan School of Management featured several panels of the brightest minds in the Space industry. A common thread running through their discussions was the prominent role that AI design now plays in all off-world innovation. James Rebesco, the CEO of Striveworks, was present at the MIT Sloan New Space Age conference on March 17. He said he was mildly "annoyed" by the concept of trusted AI; to Rebesco, AI is not so much about trust as it is about "operationalizing" it. Rebesco and others on the panel explained that AI needs to be wrapped in policy so that it has the constraints and protections necessary for real-world use.

Traceless AI

For this process to work efficiently, AI needs to "disappear" into innovation. The St. Patrick's Day panel at MIT discussed the processes and policies that will advance this absorption. The challenge innovators face is defining the traceability of AI within these systems and auditing it throughout the process. The proposed concept makes AI as "boring" and untraceable as a hex bolt in a rocket.