Modeldex
  • Models
  • Providers
  • Benchmarks
  • MCP Servers
  • News
  • Guides
Submit

A trusted intelligence layer for AI models, providers, benchmarks, MCP servers, releases, and community signals.

Live catalog · Verified sources · Operator-ready

Product

  • Models
  • Providers
  • Benchmarks
  • Compare
  • Prompts
  • Find a model
  • Trending
  • Collections
  • News
  • Changelog

Learn

  • New to AI?
  • Best AI by use case
  • Blog
  • Trust & data sources
  • Pricing
  • About
  • Support

Legal

  • Privacy
  • Terms
  • Cookies

Connect

  • GitHub
  • X / Twitter
  • Contact

© 2026 Modeldex — AI market intelligence for builders and operators.



News & Analysis

Editorial coverage, in-depth analysis, and developer guides — 7 articles.

Provider lens: Amazon · JSON export → · Atom feed →

Source lens: Official RSS for trust-aware newsroom browsing, export, and Atom subscriptions.

Sources: All sources · Official RSS · Google News fallback
Categories: All categories · Analysis · Guide · News · Research
Filtered by tag: #Technical How-to · Provider: Amazon · Source: Official RSS
  • News · Amazon (AWS)

    Automate repetitive tasks with Amazon Quick Flows

    This post shows you how to build your first AI-powered workflow using Amazon Quick, starting with a financial analysis tool and progressing to an advanced employee onboarding automation.

    Amazon (AWS) · Official RSS · Original article ↗ · Feed source ↗ · Trust notes →
    Apr 27, 2026 · Jed Lechner
    More Amazon (AWS) coverage →
  • News · Amazon (AWS)

    Build Strands Agents with SageMaker AI models and MLflow

    In this post, we demonstrate how to build AI agents using the Strands Agents SDK with models deployed on SageMaker AI endpoints. You will learn how to deploy foundation models from SageMaker JumpStart, integrate them with Strands Agents, and establish production-grade observability using SageMaker Serverless MLflow for agent tracing. We also cover how to implement A/B testing across multiple model variants, evaluate agent performance with MLflow metrics, and build, deploy, and continuously improve AI agents on infrastructure you control.

    Tags: #AI/ML · #AWS Batch · #AWS Lambda · #Advanced (300) · #Amazon Nova · #Amazon Quick Suite · #Amazon SageMaker · #Amazon SageMaker AI · #Artificial Intelligence · #Best Practices · #Compute
    Amazon (AWS) · Official RSS · Original article ↗ · Feed source ↗ · Trust notes →
    Apr 27, 2026 · Dheeraj Hegde
    More Amazon (AWS) coverage →
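Whatever the Strands Agents SDK adds on top, the JumpStart-to-agent flow described above ultimately calls a SageMaker AI endpoint through the runtime API. A minimal sketch of that bottom layer, assuming a Messages-style JSON request/response schema (the real schema depends on the serving container, and the endpoint name is hypothetical):

```python
import json

def build_invoke_request(endpoint_name: str, prompt: str, max_tokens: int = 256) -> dict:
    """Build keyword arguments for sagemaker-runtime invoke_endpoint.

    The Messages-style payload shape here is an assumption; the actual
    schema depends on the container serving the JumpStart model.
    """
    body = {
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }
    return {
        "EndpointName": endpoint_name,
        "ContentType": "application/json",
        "Body": json.dumps(body),
    }

def parse_invoke_response(raw_body: bytes) -> str:
    """Extract the assistant text from a JSON response body (assumed shape)."""
    payload = json.loads(raw_body)
    return payload["choices"][0]["message"]["content"]

# With real AWS credentials, the kwargs go straight to boto3:
#   runtime = boto3.client("sagemaker-runtime")
#   resp = runtime.invoke_endpoint(**build_invoke_request("my-endpoint", "Hi"))
#   text = parse_invoke_response(resp["Body"].read())
```

Keeping request construction and response parsing as pure functions also makes the agent's model layer easy to unit-test without a live endpoint.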
  • News · Amazon (AWS)

    Cost-effective multilingual audio transcription at scale with Parakeet-TDT and AWS Batch

    In this post, we walk through building a scalable, event-driven transcription pipeline that automatically processes audio files uploaded to Amazon Simple Storage Service (Amazon S3), and show you how to use Amazon EC2 Spot Instances and buffered streaming inference to further reduce costs.

    Amazon (AWS) · Official RSS · Original article ↗ · Feed source ↗ · Trust notes →
    Apr 22, 2026 · Gleb Geinke
    More Amazon (AWS) coverage →
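The event-driven pipeline summarized above hinges on translating an S3 upload notification into a transcription job. A sketch of that mapping as a pure function, with the queue and job-definition names as placeholders (the post's actual pipeline wires this through Lambda, with Spot capacity behind the Batch queue):

```python
import re
import urllib.parse

def s3_event_to_batch_job(event: dict, job_queue: str, job_definition: str) -> dict:
    """Map an S3 ObjectCreated event to AWS Batch submit_job kwargs.

    Queue and job-definition names are caller-supplied placeholders;
    the audio location is passed to the container via environment variables.
    """
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    # Object keys in S3 event payloads are URL-encoded (spaces become '+').
    key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
    # Batch job names only allow letters, digits, hyphens, and underscores.
    safe_name = re.sub(r"[^A-Za-z0-9_-]", "-", key)
    return {
        "jobName": f"transcribe-{safe_name}"[:128],
        "jobQueue": job_queue,
        "jobDefinition": job_definition,
        "containerOverrides": {
            "environment": [
                {"name": "INPUT_BUCKET", "value": bucket},
                {"name": "INPUT_KEY", "value": key},
            ]
        },
    }

# In the Lambda handler, the result would go to boto3:
#   boto3.client("batch").submit_job(**s3_event_to_batch_job(event, queue, jobdef))
```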
  • News · Amazon (AWS)

    End-to-end lineage with DVC and Amazon SageMaker AI MLflow apps

    In this post, we show how to combine DVC (Data Version Control), Amazon SageMaker AI, and Amazon SageMaker AI MLflow Apps to build end-to-end ML model lineage. We walk through two deployable patterns — dataset-level lineage and record-level lineage — that you can run in your own AWS account using the companion notebooks.

    Amazon (AWS) · Official RSS · Original article ↗ · Feed source ↗ · Trust notes →
    Apr 21, 2026 · Manuwai Korber
    More Amazon (AWS) coverage →
  • News · Amazon (AWS)

    Accelerate Generative AI Inference on Amazon SageMaker AI with G7e Instances

    Today, we are thrilled to announce the availability of G7e instances powered by NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs on Amazon SageMaker AI. You can provision nodes with 1, 2, 4, or 8 RTX PRO 6000 GPUs, each providing 96 GB of GDDR7 memory. This launch makes it possible to host powerful open source foundation models (FMs) such as GPT-OSS-120B, Nemotron-3-Super-120B-A12B (NVFP4 variant), and Qwen3.5-35B-A3B on a single-GPU G7e.2xlarge instance, offering organizations a cost-effective and high-performing option.

    Amazon (AWS) · Official RSS · Original article ↗ · Feed source ↗ · Trust notes →
    Apr 20, 2026 · Hazim Qudah
    More Amazon (AWS) coverage →
  • News · Amazon (AWS)

    Navigating the generative AI journey: The Path-to-Value framework from AWS

    In this post, we introduce the Generative AI Path-to-Value (P2V) framework, a structured approach to help you move generative AI initiatives from concept to production and sustained value creation.

    Amazon (AWS) · Official RSS · Original article ↗ · Feed source ↗ · Trust notes →
    Apr 14, 2026 · Nitin Eusebius
    More Amazon (AWS) coverage →
  • News · Amazon (AWS)

    How to build effective reward functions with AWS Lambda for Amazon Nova model customization

    This post demonstrates how Lambda enables scalable, cost-effective reward functions for Amazon Nova customization. You'll learn to choose between Reinforcement Learning via Verifiable Rewards (RLVR) for objectively verifiable tasks and Reinforcement Learning via AI Feedback (RLAIF) for subjective evaluation, design multi-dimensional reward systems that help you prevent reward hacking, optimize Lambda functions for training scale, and monitor reward distributions with Amazon CloudWatch. Working code examples and deployment guidance are included to help you start experimenting.

    Amazon (AWS) · Official RSS · Original article ↗ · Feed source ↗ · Trust notes →
    Apr 13, 2026 · Manoj Gupta
    More Amazon (AWS) coverage →
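A verifiable-rewards (RLVR) function of the kind the post describes can be sketched as a small Lambda-style handler. The event shape, dimension weights, and score range below are assumptions for illustration, not the actual Nova customization contract; the two-dimensional score is one way to make single-signal reward hacking harder:

```python
import json
import re

def handler(event: dict, context=None) -> dict:
    """Lambda-style reward function for a verifiable (RLVR) math task.

    Combines two dimensions so a model cannot score well by gaming one
    signal alone: (1) correctness of the final numeric answer, which is
    objectively checkable, and (2) a conciseness check that discourages
    padding the response. Event keys are assumed, not the real contract.
    """
    response = event["model_response"]
    expected = event["expected_answer"]

    # Dimension 1: verifiable correctness — compare the trailing number.
    match = re.search(r"(-?\d+(?:\.\d+)?)\s*$", response.strip())
    correct = 1.0 if match and float(match.group(1)) == float(expected) else 0.0

    # Dimension 2: format — require a brief answer (arbitrary 200-char cap).
    concise = 1.0 if len(response) <= 200 else 0.0

    reward = 0.8 * correct + 0.2 * concise
    return {"statusCode": 200, "body": json.dumps({"reward": reward})}
```

Because the handler is deterministic and dependency-free, it can be unit-tested locally before being deployed behind the training loop.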
  • #Generative AI · #High Performance Computing · #Intermediate (200) · #Strands Agents · #Technical How-to