U.K. Lawmakers Warn: AI Safety Pledges Are Becoming Window Dressing

ainvest · 2025/08/29 17:48
By: Coin World

- 60 U.K. lawmakers accuse Google DeepMind of breaching its AI safety commitments by delaying the detailed safety report for Gemini 2.5 Pro.
- The company released a simplified model card three weeks after launch, with little transparency on third-party testing or government agency involvement.
- Google says it fulfilled its commitments by publishing a technical report months later, but critics argue the delay undermines trust in safety protocols.
- Similar episodes at Meta and OpenAI point to industry-wide concerns about opaque or omitted safety disclosures.

A group of 60 U.K. lawmakers has signed an open letter accusing Google DeepMind of failing to uphold its AI safety commitments, particularly by delaying the release of detailed safety information for its Gemini 2.5 Pro model [1]. The letter, published by the political activist group PauseAI, criticizes the company for not providing a comprehensive model card, the key document describing how a model was built and tested, at the time of the model's release [1]. This failure, the signatories argue, constitutes a breach of the Frontier AI Safety Commitments made at an international summit in February 2024, under which signatories, including Google, pledged to publicly report on model capabilities, risk assessments, and third-party testing involvement [1].

Google released Gemini 2.5 Pro in March 2025 but did not publish a full model card at that time, despite claiming the model outperformed competitors on key benchmarks [1]. Instead, a simplified six-page model card was released three weeks later, which some AI governance experts described as insufficient and concerning [1]. The letter highlights that the document lacked substantive detail about external evaluations and did not confirm whether government agencies, such as the U.K. AI Security Institute, were involved in testing [1]. These omissions raise concerns about the transparency of the company’s safety practices.

In response to the criticism, a Google DeepMind spokesperson previously told Fortune that any suggestion the company was reneging on its commitments was "inaccurate" [1]. The company also said in May that a more detailed technical report would be published once the final version of the Gemini 2.5 Pro model family was available; that report was eventually released in late June, months after the full version shipped [1]. The spokesperson reiterated that the company is fulfilling its public commitments, including the Seoul Frontier AI Safety Commitments, and that Gemini 2.5 Pro underwent rigorous safety checks, including evaluations by third-party testers [1].

The letter also notes that the missing model card for Gemini 2.5 Pro appeared to contradict other pledges Google has made, including the 2023 White House Commitments and a voluntary Code of Conduct on Artificial Intelligence signed in October 2023 [1]. The situation is not unique to Google: Meta faced similar criticism over the minimal model card for its Llama 4 model, and OpenAI opted not to publish a safety report for its GPT-4.1 model, citing its non-frontier status [1]. These developments suggest a broader industry trend toward safety disclosures that are thinner, later, or omitted altogether.

The letter calls on Google to reaffirm its AI safety commitments by:

- clearly defining deployment as the point at which a model becomes publicly accessible;
- committing to publish safety evaluation reports on a set timeline for all future model releases; and
- providing full transparency for each release by naming the government agencies and independent third parties involved in testing, along with exact testing timelines [1].

Lord Browne of Ladyton, a signatory of the letter and a Member of the House of Lords, warned that if leading AI companies treat safety commitments as optional, the result could be a dangerous race to deploy increasingly powerful AI systems without proper safeguards [1].


