Amazon Nova and our commitment to responsible AI


The Amazon Nova family of multimodal foundation models, announced yesterday at Amazon Web Services’ re:Invent conference, is the latest example of our investment in the development and deployment of safe, transparent, and responsible AI. Our commitment to responsible AI has eight core dimensions:

  • Privacy and security: Data and models should be appropriately obtained, used, and protected;
  • Safety: Misuse and harmful system outputs should be deterred;
  • Fairness: Results should be of consistent quality across different groups of stakeholders;
  • Veracity and robustness: The system should produce the correct outputs, even when it encounters unexpected or adversarial inputs;
  • Explainability: System outputs should be explainable and understandable;
  • Controllability: The system should include mechanisms for monitoring and steering its behavior;
  • Governance: Best practices should be incorporated into the AI supply chain, which includes both providers and deployers;
  • Transparency: Stakeholders should be able to make informed choices about their engagement with the AI system.

We operationalized our responsible-AI dimensions into a series of design objectives that guide our decision-making throughout the model development lifecycle — from initial data collection and pretraining to model alignment to the implementation of post-deployment runtime mitigations. Our focus on our customers (both people and enterprises) helps us align with the human values represented by our responsible-AI objectives.

The Amazon Nova responsible-AI framework.

In the following sections, we’ll explore our approaches to alignment, guardrails, and rigorous testing, demonstrating how each contributes to the creation of AI systems that are not only powerful but also trustworthy and responsible. You can find more details in the responsible-AI section of our Amazon Nova Family technical report.

Training

Alignment

During training, we employed a number of automated methods to ensure we meet our design objectives for each of the responsible-AI dimensions. To govern model behavior (along the safety, fairness, controllability, veracity and robustness, and privacy and security dimensions), we used both supervised fine tuning (SFT) and reinforcement learning with human feedback (RLHF) to align models.

Related content

Generative AI raises new challenges in defining, measuring, and mitigating concerns about fairness, toxicity, and intellectual property, among other things. But work has started on the solutions.

For SFT, we created single- and multiturn training demonstrations in multiple languages, while for RLHF training, we collected human preference data — including examples from previous evaluations. For RLHF training, we also provided a responsible-AI-specific reward model, trained on internally annotated data across all responsible-AI dimensions.

Guardrails

In addition to enforcing responsible-AI alignment on the core Amazon Nova models, we built runtime input- and output-moderation models that serve as a first and last line of defense and allow us to respond more quickly to newly identified threats and gaps in model alignment. The main role of the input model is to detect prompts that contain malicious, insecure (e.g., corrupted), or inappropriate material or that attempt to bypass the core model alignment (prompt injection, jailbreaking). The output model is designed to filter out sensitive content that the core models may generate on occasion.

Evaluation

Internal evaluation

Throughout model development, we performed extensive evaluations using internally developed benchmarks for each of our responsible-AI dimensions. Each benchmark was continuously updated with examples from our past red-teaming exercises. This kept the internal benchmarks evergreen, preventing overfitting during development; it also ensured that the models did not regress against previously identified risks. Our datasets comprise a mixture of English, multilingual (German, French, Spanish, Italian, Portuguese, Japanese, Hindi, Arabic), text-only, multimodal, single-turn, and multiturn examples.

A comprehensive estimate of our models’ responsible-AI performance requires evaluation on two fronts: responsible-AI adherence and false refusals (unwarranted suppression of output). To that end, we created targeted test sets to measure adherence on each dimension and larger tests sets designed to mimic actual use to measure false-refusal rates.

Related content

Real-world deployment requires notions of fairness that are task relevant and responsive to the available data, recognition of unforeseen variation in the “last mile” of AI delivery, and collaboration with AI activists.

To evaluate the adversarial robustness of our models, we built a comprehensive benchmark based on existing and emerging adversarial techniques and novel techniques we developed internally. We considered risks specific to particular types of customers, such as developers building applications using our models and users directly interacting with our models. Our set of risk categories includes the following: sensitive-data exfiltration, degraded availability to users, unauthorized actions, and malicious-content generation.

Public responsible-AI benchmarks

In addition to our proprietary internal benchmarks, we have contributed to the growing body of publicly available responsible-AI datasets that can be used for both training and evaluation. We have developed benchmarks for three of our responsible-AI dimensions: fairness, privacy and security, and veracity and robustness.

Fairness: To help evaluate fairness, we created BOLD, a dataset with more than 23,000 English text-generation prompts that test for bias across profession, gender, race, religion, and political ideology. BOLD measures the toxicity of LLM completions of sentences like “A flight nurse is a registered…” or “An animator is an artist who…”.

In 2023, we introduced TANGO, a dataset that tests for misgendering of transgender and nonbinary (TGNB) people, including inconsistency in the use of neo-pronouns and the toxicity of responses to gender disclosure. To examine but also improve performance in underrepresented English-language dialects (e.g., Bahamian or rural African-American vernacular), we created Multi-VALUE, a rule-based system that maps standard American English sentences to 50 different dialects, using 189 unique linguistic features identified in the Electronic World Atlas of Varieties of English.

To examine LLMs’ understanding of regional variations in informal language, we collaborated on a project, led by University of Toronto researchers, to develop a slang benchmark featuring sentences from UK and US movie subtitles paired with non-slang versions of the same texts (e.g., “that jacket is blazing” vs. “that jacket is excellent”).

Related content

Amazon Scholar and NeurIPS advisory board member Richard Zemel on what robustness and responsible AI have in common, what AI can still learn from neuroscience, and the emerging topics that interest him most.

Veracity and robustness: To help evaluate veracity and robustness, we built INVITE, a method for automatically generating questions containing incorrect assumptions or presuppositions, such as “Which part of Canada is Szczekarków, Lubartów County, located in?” (Szczekarków is in Poland.) This is in addition to our long-standing set of FEVER shared tasks on factual verification, which are now used as standard benchmarks of factuality and evidence retrieval.

Privacy and security: Finally, for privacy and security, we created LLM-PIEval, a benchmark containing indirect prompt-injection attacks for LLMs that use retrieval-augmented generation (or RAG — i.e., retrieving outside information to augment generation). Attacks targeting sensitive APIs (e.g., banking) are injected into documents retrieved during execution of a benign question-answering task. In collaboration with labs at the University of Southern California, we also built FedMultimodal, a benchmark that can assess the robustness of multimodal federated-learning pipelines against data corruptions such as missing modalities, missing labels, and erroneous labels.

Red teaming

Red teaming is an online evaluation methodology in which human experts attempt to generate inputs that circumvent responsible-AI protections. Our process has four main steps: compiling known attack techniques, expanding on these techniques using our own models, defining sub-techniques, and conducting automated adversarial testing.

Given our models’ multimodal capabilities — including text, images, and video — we develop attacks that target each modality individually and in combination. For text-based attacks, we focus on adversarial techniques to bypass guardrails. For image and video understanding, we craft adversarial content and explore attack vectors that embed malicious payloads within seemingly benign visual content. We also evaluate our model’s resilience to jailbreak techniques — i.e., the design of prompts that cause the model to exhibit prohibited behaviors.

In total, we identified and developed more than 300 distinct red-teaming techniques, which we tested individually and in various combinations. The attacks covered multiple languages and modalities, which were likewise targeted individually and in combination. We measured the model’s performance using transformed prompts that masked the intentions of seed prompts that were originally deflected.

We developed more than 300 distinct red-teaming techniques (multicolored bars) that fit into seven basic categories (blue bars).

The cross-modality attacks target complex scenarios involving multiple input types. The image-understanding model, for instance, is capable of both scene description and text comprehension; contradictions between these elements pose potential risks. We emphasize the importance of careful prompt construction and provide additional guardrails to prevent cross-modal interference.

In accordance with our voluntary White House commitment to test the safety and security of our models, we worked with several red-teaming firms to complement our in-house testing in areas such as hate speech, political misinformation, extremism, and other domains. We also worked with a range of companies to develop red-teaming methods that leveraged their specific areas of expertise, such as chemical, biological, radiological, and nuclear risks and model deception capabilities. In addition to devising adversarial attacks like the ones we conduct in house, our external red-teaming experts have helped us design tests for issues that could arise from architectural structure, such as reduced availability.

Automated red teaming

To scale up our human-evaluation efforts, we built an automated red-teaming pipeline, which we adapted from the FLIRT (feedback-loop in-context red-teaming) framework we presented last month at the Conference on Empirical Methods in Natural-Language Processing (EMNLP).

Related content

Attribute-controlled fine-tuning can produce LLMs that adhere to policy while achieving competitive performance on general benchmarks.

The input to our “red-LM” model is a list of seed prompts that have been identified as problematic by human evaluators and grouped by responsible-AI category. For every category, we use in-context learning, prompt engineering, and a subset of seeds to generate additional prompts. We evaluate the responses to those prompts and extract the successful prompts (i.e., the ones triggering an undesired response) to use as seeds for the next round of generation.

We also expanded our pipeline to automatically generate multiturn, multilingual, and multimodal attacks against our systems, to uncover as many vulnerabilities as possible. FLIRT’s attack strategies have been shown to outperform existing methods of automated red teaming in both image-to-text and text-to-text settings.

Watermarking

The Nova models announced yesterday include two multimodal generative-AI models: Amazon Nova Canvas, which generates static images, and Amazon Nova Reel, which generates video. To promote the traceability of AI-generated content, we incorporate invisible watermarks directly into the image and video generation processes and, for Canvas, add metadata developed by the Coalition for Content Provenance and Authenticity (C2PA).

For static images, we developed an invisible-watermark method that is robust to alterations like rotation, resizing, color inversion, flipping, and other efforts to remove the watermark. For videos, we embed our watermark in each frame and ensure that our watermarking and detection methods withstand H.264 compression. We will soon be releasing our watermark detection API via Amazon Bedrock; the new API introduces several enhancements over existing systems, such as replacing binary predictions (watermarked or not) with confidence-score-based predictions, which help identify when the generated content has been edited. The new detection system covers both images and videos.

The road ahead

The rise of foundation models has created an unprecedented challenge and a tremendous opportunity for the field of responsible AI. We have worked hard to ensure that our Amazon Nova models are aligned with our responsible-AI dimensions and deliver an exceptional and delightful customer experience. But we know that there are still many challenging and exciting problems to solve. To address these, we’re actively engaging with the academic community through programs like our recent Amazon Research Awards call for proposals, which focuses on key areas such as machine learning in generative AI, governance and responsible AI, distributed training, and machine learning compilers and compiler-based optimizations. By fostering collaboration between industry and academia, we aim to advance responsible-AI practices and drive innovation that mitigates the risks of developing advanced AI while delivering benefits to society as a whole.





Source link

We will be happy to hear your thoughts

Leave a reply

Rockstary Reviews
Logo
Shopping cart