Openness Index Methodology
Overview
The Artificial Analysis Openness Index is a composite metric that measures the degree to which AI models are openly available and transparently documented. It covers both the availability of model weights and the transparency of the underlying data and methodology used to create the model.
Each model is assessed based on the full set of public first-party information available. Where models are derived from a third-party base model, they may be constrained by the licensing or limited disclosure of the upstream model. For incremental or update releases, only disclosures explicitly about the new release are considered (including allowing model creators to declare which components remain consistent with an earlier release).
A detailed methodology specification is available to download as a PDF.
Index Composition
Each component is scored on a 0–3 qualitative scale based on the best-fitting openness archetype.
Score Calculation
The final Openness Index score is derived as follows:
- Data components are scored separately for pre-training and post-training, then averaged to give a combined data score (up to 6 possible points across Access and License).
- All component scores are summed, for a maximum raw score of 18.
- The raw score is normalized to a 0–100 scale: Openness Index = (raw score ÷ 18) × 100.
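The steps above can be sketched in code. This is a minimal illustration, not the official implementation: the component names below are placeholders (the excerpt only names Access and License for the data components; the full component list is in the downloadable PDF specification), and it assumes the non-data components together account for the remaining 12 of the 18 raw points.

```python
def openness_index(pre_training: dict, post_training: dict,
                   other_components: dict) -> float:
    """Sketch of the Openness Index calculation described above.

    pre_training / post_training: data component scores on the 0-3 scale,
    keyed by 'access' and 'license'.
    other_components: the remaining component scores on the same 0-3 scale
    (keys here are placeholders for the actual component names).
    """
    # Average pre- and post-training scores per data component, giving a
    # combined data score of up to 6 points across Access and License.
    data_score = sum(
        (pre_training[k] + post_training[k]) / 2 for k in ("access", "license")
    )
    # Sum all component scores for a maximum raw score of 18,
    # then normalize to a 0-100 scale.
    raw = data_score + sum(other_components.values())
    return raw / 18 * 100
```

For example, a model scoring the maximum 3 on every component in both data phases would receive an index of 100, while a model with no public disclosures would score 0.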