Since we launched Amazon Nova customization in Amazon SageMaker AI at the AWS NY Summit 2025, customers have been asking for the same capabilities with Amazon Nova as they have when they customize open weights models in Amazon SageMaker Inference. They also wanted more control and flexibility in custom model inference over the instance types, auto scaling policies, context length, and concurrency settings that production workloads demand.
Today, we’re announcing the general availability of custom Nova model support in Amazon SageMaker Inference, a production-grade, configurable, and cost-efficient managed inference service to deploy and scale full-rank customized Nova models. You can now experience an end-to-end customization journey to train Nova Micro, Nova Lite, and Nova 2 Lite models with reasoning capabilities using Amazon SageMaker Training Jobs or Amazon SageMaker HyperPod, and seamlessly deploy them with the managed inference infrastructure of Amazon SageMaker AI.
With Amazon SageMaker Inference for custom Nova models, you can reduce inference cost through optimized GPU utilization using Amazon Elastic Compute Cloud (Amazon EC2) G5 and G6 instances over P5 instances, auto scaling based on 5-minute utilization patterns, and configurable inference parameters. This feature supports deployment of Nova models customized with continued pre-training, supervised fine-tuning, or reinforcement fine-tuning for your use cases. You can also set advanced configurations for context length, concurrency, and batch size to optimize the latency-cost-accuracy tradeoff for your specific workloads.
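Because these are standard SageMaker AI real-time endpoints, you can attach the usual Application Auto Scaling target-tracking policy to scale on invocation load. The following is a minimal sketch; the endpoint name, variant name, capacity bounds, and target value are illustrative assumptions you would replace with your own:

import boto3

autoscaling = boto3.client("application-autoscaling")

# Hypothetical endpoint and variant names for illustration
resource_id = "endpoint/my-nova-endpoint/variant/primary"

# Register the endpoint variant as a scalable target (1-4 instances)
autoscaling.register_scalable_target(
    ServiceNamespace="sagemaker",
    ResourceId=resource_id,
    ScalableDimension="sagemaker:variant:DesiredInstanceCount",
    MinCapacity=1,
    MaxCapacity=4
)

# Track invocations per instance; cooldowns align with 5-minute utilization windows
autoscaling.put_scaling_policy(
    PolicyName="nova-invocations-target-tracking",
    ServiceNamespace="sagemaker",
    ResourceId=resource_id,
    ScalableDimension="sagemaker:variant:DesiredInstanceCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 100.0,  # assumed target invocations per instance
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "SageMakerVariantInvocationsPerInstance"
        },
        "ScaleInCooldown": 300,
        "ScaleOutCooldown": 300
    }
)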
Let’s see how you can deploy customized Nova models on SageMaker AI real-time endpoints, configure inference parameters, and invoke your models for testing.
Deploy custom Nova models in SageMaker Inference
At AWS re:Invent 2025, we introduced new serverless customization in Amazon SageMaker AI for popular AI models, including Nova models. With a few clicks, you can seamlessly select a model and customization technique, and handle model evaluation and deployment. If you already have a trained custom Nova model artifact, you can deploy the model on SageMaker Inference through SageMaker Studio or the SageMaker AI SDK.
In SageMaker Studio, choose a trained Nova model from your models in the Models menu. You can deploy the model by choosing Deploy, SageMaker AI, and Create new endpoint.

Choose the endpoint name, instance type, and advanced options such as instance count, max instance count, and permissions and networking, then choose Deploy. At GA launch, you can use g5.12xlarge, g5.24xlarge, g5.48xlarge, g6.12xlarge, g6.24xlarge, g6.48xlarge, and p5.48xlarge instance types for the Nova Micro model; g5.48xlarge, g6.48xlarge, and p5.48xlarge for the Nova Lite model; and p5.48xlarge for the Nova 2 Lite model.

Creating your endpoint takes time to provision the infrastructure, download your model artifacts, and initialize the inference container.
After model deployment completes and the endpoint status shows InService, you can perform real-time inference using the new endpoint. To test the model, choose the Playground tab and enter your prompt in Chat mode.

You can also use the SageMaker AI SDK to create two resources: a SageMaker AI model object that references your Nova model artifacts, and an endpoint configuration that defines how the model will be deployed.
The following code sample creates a SageMaker AI model that references your Nova model artifacts. For the supported container images by Region, refer to the table of container image URIs in the documentation:
import boto3

sagemaker = boto3.client("sagemaker")

# Create a SageMaker AI model
model_response = sagemaker.create_model(
    ModelName="Nova-micro-ml-g5-12xlarge",
    PrimaryContainer={
        'Image': '708977205387.dkr.ecr.us-east-1.amazonaws.com/nova-inference-repo:v1.0.0',
        'ModelDataSource': {
            'S3DataSource': {
                'S3Uri': 's3://your-bucket-name/path/to/model/artifacts/',
                'S3DataType': 'S3Prefix',
                'CompressionType': 'None'
            }
        },
        # Model parameters (environment variable values must be strings)
        'Environment': {
            'CONTEXT_LENGTH': '8000',
            'MAX_CONCURRENCY': '16',
            'DEFAULT_TEMPERATURE': '0.0',
            'DEFAULT_TOP_P': '1.0'
        }
    },
    ExecutionRoleArn=SAGEMAKER_EXECUTION_ROLE_ARN,
    EnableNetworkIsolation=True
)
print("Model created successfully!")
Next, create an endpoint configuration that defines your deployment infrastructure, and deploy your Nova model by creating a SageMaker AI real-time endpoint. This endpoint will host your model and provide a secure HTTPS endpoint for making inference requests.
# Create an endpoint configuration
production_variant = {
    'VariantName': 'primary',
    'ModelName': 'Nova-micro-ml-g5-12xlarge',
    'InitialInstanceCount': 1,
    'InstanceType': 'ml.g5.12xlarge',
}

config_response = sagemaker.create_endpoint_config(
    EndpointConfigName="Nova-micro-ml-g5-12xlarge-Config",
    ProductionVariants=[production_variant]  # ProductionVariants expects a list
)
print("Endpoint configuration created successfully!")

# Deploy your Nova model
endpoint_response = sagemaker.create_endpoint(
    EndpointName="Nova-micro-ml-g5-12xlarge-endpoint",
    EndpointConfigName="Nova-micro-ml-g5-12xlarge-Config"
)
print("Endpoint creation initiated successfully!")
After the endpoint is created, you can send inference requests to generate predictions from your custom Nova model. Amazon SageMaker AI supports synchronous endpoints for real-time inference in streaming and non-streaming modes, and asynchronous endpoints for batch processing.
For example, the following code creates a streaming chat completion request for text generation:
import json

import boto3
from botocore.exceptions import ClientError

runtime_client = boto3.client("sagemaker-runtime")
ENDPOINT_NAME = "Nova-micro-ml-g5-12xlarge-endpoint"

def invoke_nova_endpoint(request_body):
    """
    Invoke the Nova endpoint with automatic streaming detection.

    Args:
        request_body (dict): Request payload containing the prompt and parameters

    Returns:
        dict: Response from the model (for non-streaming requests)
        None: For streaming requests (prints output directly)
    """
    body = json.dumps(request_body)
    is_streaming = request_body.get("stream", False)

    try:
        print(f"Invoking endpoint ({'streaming' if is_streaming else 'non-streaming'})...")
        if is_streaming:
            response = runtime_client.invoke_endpoint_with_response_stream(
                EndpointName=ENDPOINT_NAME,
                ContentType="application/json",
                Body=body
            )
            event_stream = response['Body']
            for event in event_stream:
                if 'PayloadPart' in event:
                    chunk = event['PayloadPart']
                    if 'Bytes' in chunk:
                        data = chunk['Bytes'].decode()
                        print("Chunk:", data)
        else:
            # Non-streaming inference
            response = runtime_client.invoke_endpoint(
                EndpointName=ENDPOINT_NAME,
                ContentType="application/json",
                Accept="application/json",
                Body=body
            )
            response_body = response['Body'].read().decode('utf-8')
            result = json.loads(response_body)
            print("✅ Response received successfully")
            return result
    except ClientError as e:
        error_code = e.response['Error']['Code']
        error_message = e.response['Error']['Message']
        print(f"❌ AWS Error: {error_code} - {error_message}")
    except Exception as e:
        print(f"❌ Unexpected error: {str(e)}")

# Streaming chat request with comprehensive parameters
streaming_request = {
    "messages": [
        {"role": "user", "content": "Compare our Q4 2025 actual spend against budget across all departments and highlight variances exceeding 10%"}
    ],
    "max_tokens": 512,
    "stream": True,
    "temperature": 0.7,
    "top_p": 0.95,
    "top_k": 40,
    "logprobs": True,
    "top_logprobs": 2,
    "reasoning_effort": "low",  # Options: "low", "high"
    "stream_options": {"include_usage": True}
}

invoke_nova_endpoint(streaming_request)
To use the full code examples, visit Getting started with customizing Nova models on SageMaker AI. To learn more about best practices for deploying and managing models, visit Best practices for SageMaker AI.
Now available
Amazon SageMaker Inference for custom Nova models is available today in the US East (N. Virginia) and US West (Oregon) AWS Regions. For Regional availability and the future roadmap, visit AWS Capabilities by Region.
The feature supports Nova Micro, Nova Lite, and Nova 2 Lite models with reasoning capabilities, running on EC2 G5, G6, and P5 instances with auto scaling support. You pay only for the compute instances you use, with per-hour billing and no minimum commitments. For more information, visit the Amazon SageMaker AI Pricing page.
Give it a try in the Amazon SageMaker AI console and send feedback to AWS re:Post for SageMaker or through your usual AWS Support contacts.
— Channy


