First “modern and powerful” open source LLM?

Key features

  • Fully open model: open weights + open data + full training details, including all data and training recipes
  • Massively Multilingual: 1811 natively supported languages
  • Compliant: Apertus is trained while respecting the opt-out consent of data owners (even retrospectively), and avoiding memorization of training data
  • KubeRoot@discuss.tchncs.de · 3 days ago

    Apertus was developed with due consideration to Swiss data protection laws, Swiss copyright laws, and the transparency obligations under the EU AI Act. Particular attention has been paid to data integrity and ethical standards: the training corpus builds only on data which is publicly available. It is filtered to respect machine-readable opt-out requests from websites, even retroactively, and to remove personal data, and other undesired content before training begins.

    We probably won’t get better than this, but it sounds like it’s still trained on scraped data unless you explicitly opt out, including anything mirrored by third parties that don’t opt out. Also, they can remove data from the training material retroactively… but presumably they won’t retrain the model from scratch, which means the removed data will still be in its weights, and the official weights will keep a potential advantage over models trained later on the cleaned training data.

    From the license:

    SNAI will regularly provide a file with hash values for download which you can apply as an output filter to your use of our Apertus LLM. The file reflects data protection deletion requests which have been addressed to SNAI as the developer of the Apertus LLM. It allows you to remove Personal Data contained in the model output.

    Oof, so they’re basically passing data protection deletion requests on to the users and telling each of them to account for those requests themselves.
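As a rough illustration of how such a hash-based output filter could work (the license doesn’t specify the file format or normalization rules, so everything below — SHA-256, span length, whitespace normalization — is an assumption):

```python
import hashlib

def normalize(span: str) -> str:
    """Lowercase and collapse whitespace before hashing (assumed normalization)."""
    return " ".join(span.lower().split())

def load_blocklist(lines):
    """Parse a hypothetical file of one hex digest per line into a set."""
    return {line.strip() for line in lines if line.strip()}

def redact(output: str, blocklist: set[str], n: int = 5) -> str:
    """Replace any n-word span of the model output whose hash is blocklisted."""
    words = output.split()
    redacted = words[:]
    for i in range(len(words) - n + 1):
        span = normalize(" ".join(words[i:i + n]))
        digest = hashlib.sha256(span.encode()).hexdigest()
        if digest in blocklist:
            for j in range(i, i + n):
                redacted[j] = "[REDACTED]"
    return " ".join(redacted)
```

Note that a filter like this only catches verbatim spans — any paraphrase of the deleted personal data would slip through, which is part of why pushing the obligation onto users is questionable.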

    They also claim “open data”, but I’m having trouble finding the actual training data, only the “Training data reconstruction scripts”…

    • lime!@feddit.nu · 3 days ago

      that’s the problem with deletion requests: the data isn’t in there. it can’t be, from a purely mathematical standpoint. statistically, with the amount of stuff that goes into training, any full work included in an llm is represented by less than one bit. but the model just… remakes sensitive information from scratch. it reconstructs infringing data based on patterns.

      which of course highlights the big issue with data anonymization: it can’t really be done.
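The per-token intuition behind that capacity argument can be sanity-checked with a back-of-envelope calculation. The figures below are illustrative assumptions, not Apertus’s actual parameter count or corpus size:

```python
# Rough capacity-per-token estimate: total weight storage divided by
# the number of training tokens it was fit on. All figures are assumed.
params = 8e9             # assumed parameter count (8B-class model)
bits_per_param = 16      # bf16 weights
training_tokens = 15e12  # assumed training corpus size in tokens

model_bits = params * bits_per_param
bits_per_token = model_bits / training_tokens  # capacity spread per token

print(f"~{bits_per_token:.4f} bits of weight storage per training token")
```

With numbers in this ballpark, the model has well under a hundredth of a bit of weight storage per training token, so verbatim retention of any given passage has to be the exception, not the rule — the danger is reconstruction from patterns, as noted above.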