Modeldex
© 2026 Modeldex — the AI model registry.



News & Analysis

Editorial coverage, in-depth analysis, and developer guides — 5 articles.

Filtered by tag: #Technical How-to
  • News · Amazon (AWS)

    Cost-effective multilingual audio transcription at scale with Parakeet-TDT and AWS Batch

    In this post, we walk through building a scalable, event-driven transcription pipeline that automatically processes audio files uploaded to Amazon Simple Storage Service (Amazon S3), and show you how to use Amazon EC2 Spot Instances and buffered streaming inference to further reduce costs.

    Amazon (AWS) · Official RSS · Original article ↗ · Feed source ↗ · Trust notes →
    Apr 22, 2026 · Gleb Geinke
  • News · Amazon (AWS)

    End-to-end lineage with DVC and Amazon SageMaker AI MLflow apps

    In this post, we show how to combine DVC (Data Version Control), Amazon SageMaker AI, and Amazon SageMaker AI MLflow Apps to build end-to-end ML model lineage. We walk through two deployable patterns — dataset-level lineage and record-level lineage — that you can run in your own AWS account using the companion notebooks.

    Amazon (AWS) · Official RSS · Original article ↗ · Feed source ↗ · Trust notes →
    Apr 21, 2026 · Manuwai Korber
  • News · Amazon (AWS)

    Accelerate Generative AI Inference on Amazon SageMaker AI with G7e Instances

    Today, we are thrilled to announce the availability of G7e instances powered by NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs on Amazon SageMaker AI. You can provision instances with 1, 2, 4, or 8 RTX PRO 6000 GPUs, each providing 96 GB of GDDR7 memory. This launch makes it possible to host powerful open source foundation models (FMs) such as GPT-OSS-120B, Nemotron-3-Super-120B-A12B (NVFP4 variant), and Qwen3.5-35B-A3B on a single-GPU G7e.2xlarge instance, offering organizations a cost-effective and high-performing option.

    Amazon (AWS) · Official RSS · Original article ↗ · Feed source ↗ · Trust notes →
    Apr 20, 2026 · Hazim Qudah
  • News · Amazon (AWS)

    Navigating the generative AI journey: The Path-to-Value framework from AWS

    In this post, we introduce the Generative AI Path-to-Value (P2V) framework, a structured approach to help you move generative AI initiatives from concept to production and sustained value creation.

    Amazon (AWS) · Official RSS · Original article ↗ · Feed source ↗ · Trust notes →
    Apr 14, 2026 · Nitin Eusebius
  • News · Amazon (AWS)

    How to build effective reward functions with AWS Lambda for Amazon Nova model customization

    This post demonstrates how Lambda enables scalable, cost-effective reward functions for Amazon Nova customization. You'll learn to choose between Reinforcement Learning via Verifiable Rewards (RLVR) for objectively verifiable tasks and Reinforcement Learning via AI Feedback (RLAIF) for subjective evaluation, design multi-dimensional reward systems that help you prevent reward hacking, optimize Lambda functions for training scale, and monitor reward distributions with Amazon CloudWatch. Working code examples and deployment guidance are included to help you start experimenting.

    Amazon (AWS) · Official RSS · Original article ↗ · Feed source ↗ · Trust notes →
    Apr 13, 2026 · Manoj Gupta
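The event-driven pattern in the Parakeet-TDT transcription article above can be sketched in a few lines: an S3 `ObjectCreated` notification triggers a function that submits one AWS Batch job per uploaded audio file. The queue and job-definition names below are illustrative assumptions, not values from the post, and the actual `submit_job` call is noted in a comment so the sketch runs offline.

```python
import re
from urllib.parse import unquote_plus

# Assumed names for illustration -- the post's real resources are not reproduced here.
JOB_QUEUE = "transcription-spot-queue"      # assumed: Batch queue backed by EC2 Spot
JOB_DEFINITION = "parakeet-tdt-transcribe"  # assumed: container running Parakeet-TDT

def build_batch_jobs(s3_event: dict) -> list[dict]:
    """Map an S3 ObjectCreated event to AWS Batch submit_job payloads."""
    jobs = []
    for record in s3_event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        # S3 event notifications URL-encode object keys (e.g. spaces become '+').
        key = unquote_plus(record["s3"]["object"]["key"])
        jobs.append({
            "jobName": re.sub(r"[^A-Za-z0-9_-]", "-", key)[:128],
            "jobQueue": JOB_QUEUE,
            "jobDefinition": JOB_DEFINITION,
            "containerOverrides": {
                "environment": [
                    {"name": "INPUT_S3_URI", "value": f"s3://{bucket}/{key}"},
                ],
            },
        })
    return jobs

# In a real Lambda handler you would pass each payload to
# boto3.client("batch").submit_job(**job); omitted so this sketch needs no AWS access.
```

The buffered-streaming and Spot-interruption handling the article covers would live inside the Batch container itself; this sketch only shows the S3-to-Batch glue.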
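For the Lambda reward-function article above, a minimal RLVR-style handler might look like the following. The event shape, weighting, and scoring dimensions are assumptions for illustration only; the actual Amazon Nova customization contract is defined in the post and the AWS documentation.

```python
import re

def verifiable_reward(completion: str, ground_truth: str) -> float:
    """RLVR-style check: 1.0 if the last number in the completion matches the answer."""
    nums = re.findall(r"-?\d+(?:\.\d+)?", completion)
    return 1.0 if nums and float(nums[-1]) == float(ground_truth) else 0.0

def length_penalty(completion: str, max_words: int = 256) -> float:
    """Secondary dimension to discourage reward hacking via verbosity."""
    return max(0.0, 1.0 - len(completion.split()) / max_words)

def handler(event, context=None):
    # Assumed event shape: {"samples": [{"completion": ..., "ground_truth": ...}]}
    rewards = []
    for sample in event["samples"]:
        score = (0.8 * verifiable_reward(sample["completion"], sample["ground_truth"])
                 + 0.2 * length_penalty(sample["completion"]))
        rewards.append(round(score, 4))
    return {"rewards": rewards}
```

Combining a hard verifiable signal with a small secondary dimension, as sketched here, is one way to build the multi-dimensional rewards the article describes; an RLAIF variant would replace `verifiable_reward` with a judge-model call.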

Tags

#AI/ML · #AWS Batch · #AWS Lambda · #Advanced (300) · #Amazon Nova · #Amazon SageMaker · #Amazon SageMaker AI · #Artificial Intelligence · #Best Practices · #Compute · #Generative AI · #High Performance Computing · #Intermediate (200) · #Technical How-to