
Open-source AI is not the ultimate solution—integrating AI on-chain is | Opinion

By admin · May 6, 2025 · 5 min read

    Disclosure:

The views and opinions expressed here belong solely to the author and do not represent the views and opinions of the ethdaily.net editorial team.

In January 2025, DeepSeek’s R1 surpassed ChatGPT as the most downloaded free app on the US Apple App Store. Unlike proprietary models like ChatGPT, DeepSeek is open-source, meaning anyone can access the code, study it, share it, and use it for their own models.

This shift has fueled excitement about transparency in AI, pushing the industry toward greater openness. Just weeks ago, in February 2025, Anthropic released Claude 3.7 Sonnet, a hybrid reasoning model that’s partially open for research previews, further amplifying the conversation around accessible AI.

Yet, while these developments drive innovation, they also expose a dangerous misconception: that open-source AI is inherently more secure (and safer) than closed models.

    The promise and the pitfalls

    Open-source AI models like DeepSeek’s R1 and Replit’s latest coding agents show us the power of accessible technology. DeepSeek claims it built its system for just $5.6 million, nearly one-tenth the cost of Meta’s Llama model. Meanwhile, Replit’s Agent, supercharged by Claude 3.5 Sonnet, lets anyone, even non-coders, build software from natural language prompts.

The implications are huge: practically everyone, including smaller companies, startups, and independent developers, can now build new specialized AI applications, including new AI agents, on top of these existing (and very robust) models at lower cost, at a faster rate, and with far greater ease. This could create a new AI economy where accessibility to models is king.

But where open-source shines—accessibility—it also faces heightened scrutiny. Free access, as seen with DeepSeek’s $5.6 million model, democratizes innovation but opens the door to cyber risks. Malicious actors could tweak these models to craft malware or exploit vulnerabilities faster than patches emerge.

Open-source AI doesn’t lack safeguards by default. It builds on a legacy of transparency that has fortified technology for decades. Historically, engineers leaned on “security through obfuscation,” hiding system details behind proprietary walls. That approach faltered: vulnerabilities surfaced, often discovered first by bad actors. Open-source flipped this model, exposing code—like DeepSeek’s R1 or Replit’s Agent—to public scrutiny, fostering resilience through collaboration. Yet, neither open nor closed AI models inherently guarantee robust verification.

The ethical stakes are just as critical. Open-source AI, much like its closed counterparts, can mirror biases or produce harmful outputs rooted in training data. This isn’t a flaw unique to openness; it’s a challenge of accountability. Transparency alone doesn’t erase these risks, nor does it fully prevent misuse. The difference lies in how open-source invites collective oversight, a strength that proprietary models often lack, though it still demands mechanisms to ensure integrity.

    The need for verifiable AI

    For open-source AI to be more trusted, it needs verification. Without it, both open and closed models can be altered or misused, amplifying misinformation or skewing automated decisions that increasingly shape our world. It’s not enough for models to be accessible; they must also be auditable, tamper-proof, and accountable.

By using distributed networks, blockchains can certify that AI models remain unaltered, their training data stays transparent, and their outputs can be validated against known baselines. Unlike centralized verification, which hinges on trusting one entity, blockchain’s decentralized, cryptographic approach stops bad actors from tampering behind closed doors. It also flips the script on third-party control, spreading oversight across a network and creating incentives for broader participation, unlike today, where unpaid contributors fuel trillion-token datasets without consent or reward, then pay to use the results.

A blockchain-powered verification framework brings layers of security and transparency to open-source AI. Storing models on-chain, or via cryptographic fingerprints, ensures modifications are tracked openly, letting developers and users confirm they’re using the intended version.
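The fingerprint idea can be pictured in a few lines. This is a minimal sketch, not any particular protocol: the "on-chain record" is mocked as a plain string, where a real system would anchor it in a smart contract or ledger entry.

```python
import hashlib

def model_fingerprint(weights: bytes) -> str:
    """Compute a SHA-256 fingerprint of serialized model weights."""
    return hashlib.sha256(weights).hexdigest()

def verify_model(weights: bytes, published_fingerprint: str) -> bool:
    """Check a local copy of the weights against the published fingerprint."""
    return model_fingerprint(weights) == published_fingerprint

# Publisher side: fingerprint the released weights and record the digest
# (in practice, in an on-chain transaction; here, just a variable).
released_weights = b"model-weights-v1"
onchain_record = model_fingerprint(released_weights)

# User side: verify a downloaded copy before running it.
assert verify_model(b"model-weights-v1", onchain_record)        # intact copy
assert not verify_model(b"model-weights-v1-evil", onchain_record)  # tampered copy
```

Any single-byte change to the weights produces a completely different digest, which is what lets users confirm they hold the intended version.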

Capturing training data origins on a blockchain proves models draw from unbiased, quality sources, cutting risks of hidden biases or manipulated inputs. Plus, cryptographic techniques can validate outputs without exposing the personal data users share (often unprotected), balancing privacy with trust as models strengthen.
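One common way to commit to a dataset's origins without publishing the data itself is a Merkle root: a single hash that changes if any record changes. The sketch below is illustrative only (toy records, no real chain); it shows the commitment idea the paragraph describes, not a specific provenance protocol.

```python
import hashlib

def h(data: bytes) -> bytes:
    """SHA-256 of raw bytes."""
    return hashlib.sha256(data).digest()

def merkle_root(records: list[bytes]) -> str:
    """Fold a list of data records into a single Merkle root hex digest."""
    level = [h(r) for r in records]
    while len(level) > 1:
        if len(level) % 2:            # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0].hex()

# The dataset publisher commits this root (e.g., on-chain). Later, anyone
# holding the same records can recompute it and confirm nothing was swapped.
records = [b"doc-001", b"doc-002", b"doc-003"]
committed_root = merkle_root(records)

assert merkle_root(records) == committed_root
assert merkle_root([b"doc-001", b"tampered", b"doc-003"]) != committed_root
```

Because the root commits to every record, auditors can later verify individual entries with compact membership proofs rather than re-downloading the whole dataset.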

Blockchain’s transparent, tamper-resistant nature offers the accountability open-source AI desperately needs. Where AI systems now thrive on user data with little protection, blockchain can reward contributors and safeguard their inputs. By weaving in cryptographic proofs and decentralized governance, we can build an AI ecosystem that’s open, secure, and less beholden to centralized giants.

AI’s future is based on trust… on-chain

    Open-source AI is an important piece of the puzzle, and the AI industry should work to achieve even more transparency—but being open-source is not the final destination.

The future of AI and its relevance will be built on trust, not just accessibility. And trust can’t be open-sourced. It must be built, verified, and reinforced at every level of the AI stack. Our industry needs to focus its attention on the verification layer and the integration of safe AI. For now, bringing AI on-chain and leveraging blockchain tech is our safest bet for building a more trustworthy future.



    David Pinger

David Pinger is the co-founder and CEO of Warden Protocol, a company that focuses on bringing safe AI to web3. Before co-founding Warden, he led research and development at Qredo Labs, driving web3 innovations such as stateless chains, WebAssembly, and zero-knowledge proofs. Before Qredo, he held roles in product, data analytics, and operations at both Uber and Binance. David began his career as a financial analyst in venture capital and private equity, funding high-growth internet startups. He holds an MBA from Panthéon-Sorbonne University.

