You can use the /get_endpoint_workers/ and /get_autogroup_workers/ endpoints to get a list of workers under an endpoint group and an autoscaling group, respectively. Both endpoints take the ID of the group. You must include your API key either in the headers as a bearer token or in the payload with the key "api_token".
Example payload:
```json
{
  "id": 123,
  "api_key": "$API_KEY"
}
```
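A minimal Python sketch of such a call, assuming the `requests` library, a JSON POST body, and a placeholder base URL (substitute your actual API host and your own endpoint group ID):

```python
import os
import requests

BASE_URL = "https://run.vast.ai"  # assumption: placeholder host, substitute your deployment's API host
API_KEY = os.environ["API_KEY"]   # your API key

# API key sent in the payload; alternatively pass it as a bearer token:
# headers={"Authorization": f"Bearer {API_KEY}"}
payload = {"id": 123, "api_key": API_KEY}  # 123 is an example endpoint group ID

resp = requests.post(f"{BASE_URL}/get_endpoint_workers/", json=payload)
resp.raise_for_status()
workers = resp.json()
print(f"Got {len(workers)} workers")
```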
These values are returned:

- `cur_load`: Current number of tokens the worker is receiving per second
- `cur_load_rolling_avg`: Rolling average of `cur_load`
- `cur_perf`:
- `disk_usage`: Storage used by the instance (in GB)
- `dlperf`: Measured DLPerf of the instance
- `id`: Instance ID
- `loaded_at`: Unix epoch time the instance finished loading
- `measured_perf`: Benchmarked performance (tokens/s). Set to DLPerf if the instance is not benchmarked yet
- `perf`: `measured_perf` * `reliability`
- `reliability`: Uptime of the instance, ranges 0-1
- `reqs_working`: Number of active requests currently being processed by the instance
- `status`: Status of the worker; can be one of: `starting`, `loading`, `running`, `idle`, `stop_queued`, `stopping`, `stopped`, `unavail`, `error`

Example response:
```json
[
  {
    "cur_load": 150,
    "cur_load_rolling_avg": 50,
    "cur_perf": 80,
    "disk_usage": 30,
    "dlperf": 105.87206734930771,
    "id": 123456,
    "loaded_at": 1724275993.997,
    "measured_perf": 105.87206734930771,
    "perf": 100.5784639818423245,
    "reliability": 0.95,
    "reqs_working": 2,
    "status": "running"
  }
]
```
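As a sketch of working with the response in Python (the worker dict below is copied from the example response above), the snippet filters for running workers and recomputes `perf` from `measured_perf` and `reliability` as described in the field list:

```python
# Parsed JSON response (here hard-coded from the example above).
workers = [
    {
        "cur_load": 150,
        "cur_load_rolling_avg": 50,
        "cur_perf": 80,
        "disk_usage": 30,
        "dlperf": 105.87206734930771,
        "id": 123456,
        "loaded_at": 1724275993.997,
        "measured_perf": 105.87206734930771,
        "perf": 100.5784639818423245,
        "reliability": 0.95,
        "reqs_working": 2,
        "status": "running",
    }
]

# Keep only workers that are currently serving requests.
running = [w for w in workers if w["status"] == "running"]

for w in running:
    # perf is measured_perf scaled by reliability (uptime).
    expected_perf = w["measured_perf"] * w["reliability"]
    print(f"worker {w['id']}: load={w['cur_load']} tok/s, perf~{expected_perf:.2f} tok/s")
```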