You generate a model quality baseline that you can continuously monitor model quality against. To generate the baseline, you first invoke the endpoint created earlier with validation data; the predictions the deployed model returns for that data become the baseline dataset. You can use either the training or the validation dataset to create the baseline.
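As a minimal sketch of producing that baseline dataset, assuming a deployed endpoint behind a SageMaker Predictor and a validation CSV with the label in the first column (the endpoint name, file paths, and 0.5 cut-off here are placeholders, not from the source):

import csv
from sagemaker.predictor import Predictor
from sagemaker.serializers import CSVSerializer

# Placeholder endpoint name; substitute the endpoint created earlier.
predictor = Predictor(endpoint_name="my-endpoint", serializer=CSVSerializer())

with open("test_data/validation.csv") as src, \
     open("test_data/validation_with_predictions.csv", "w", newline="") as dst:
    writer = csv.writer(dst)
    writer.writerow(["probability", "prediction", "label"])
    for row in csv.reader(src):
        label, features = row[0], row[1:]    # assumes label is the first column
        probability = float(predictor.predict(",".join(features)))
        prediction = int(probability > 0.5)  # simple 0.5 cut-off for the binary label
        writer.writerow([probability, prediction, label])

Each prediction is written alongside its ground-truth label, producing the probability,prediction,label layout shown below.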

The first rows of test_data/validation_with_predictions.csv look like this:

probability,prediction,label
0.01516005303710699,0,0
0.1684480607509613,0,0
0.21427156031131744,0,0
0.06330718100070953,0,0
0.02791607193648815,0,0
0.014169521629810333,0,0
0.00571369007229805,0,0
0.10534518957138062,0,0
0.025899196043610573,0,0
from sagemaker.model_monitor import ModelQualityMonitor
from sagemaker.model_monitor.dataset_format import DatasetFormat

model_quality_monitor = ModelQualityMonitor(
    ...  # role, instance configuration, and session omitted in the source
)

# Run a baselining job over the predictions CSV; the column names tell the job
# which fields hold the model score, the predicted class, and the true label.
job = model_quality_monitor.suggest_baseline(
    job_name=baseline_job_name,
    baseline_dataset=baseline_dataset_uri,  # test_data/validation_with_predictions.csv
    dataset_format=DatasetFormat.csv(header=True),
    output_s3_uri=baseline_results_uri,
    problem_type="BinaryClassification",
    inference_attribute="prediction",
    probability_attribute="probability",
    ground_truth_attribute="label",
)
job.wait(logs=False)
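Once the baselining job finishes, you can read back the constraints it suggested. A quick check, assuming the job completed successfully; suggested_constraints() loads the constraints.json written under baseline_results_uri:

import json

# Inspect the metric constraints the baselining job proposed.
constraints = model_quality_monitor.suggested_constraints()
print(json.dumps(constraints.body_dict, indent=2))

These suggested constraints are what subsequent monitoring schedules compare live model quality against.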