Python SDK

Python SDK for the Infrared City simulation platform. Run urban microclimate analyses — wind, solar, thermal comfort — from a few lines of code, with APIs for multi-tile area analyses, buildings, vegetation, ground materials, weather data, and webhook-driven async jobs.

Notebooks for all 8 analyses, agent skills (Claude Code / Cursor / Codex / Copilot / Windsurf), and runnable Python recipes live at Infrared-city/infrared-skills.

Features:

  • 8 analysis types: wind speed, pedestrian wind comfort, daylight availability, direct sun hours, sky view factors, solar radiation, thermal comfort (UTCI), thermal comfort statistics
  • Area API for multi-tile polygon analysis with automatic tiling, merging, and clipping
  • Buildings API for 3D building data retrieval
  • Vegetation API for tree data retrieval
  • Ground Materials API for surface material layers
  • Weather data integration
  • Async job submission with webhook notifications and persistable schedules
  • Fully typed (PEP 561)

Installation

pip install infrared-sdk

Or with uv:

uv add infrared-sdk

Requirements: Python 3.11+. Dependencies (requests, pydantic, validators, numpy) are installed automatically.

Quick Start

from infrared_sdk import InfraredClient
from infrared_sdk.analyses.types import WindModelRequest, AnalysesName

polygon = {
    "type": "Polygon",
    "coordinates": [[
        [11.570, 48.195], [11.580, 48.195],
        [11.580, 48.201], [11.570, 48.201],
        [11.570, 48.195],
    ]],
}

# api_key and base_url fall back to INFRARED_API_KEY / INFRARED_BASE_URL env vars
with InfraredClient() as client:
    # 1. Fetch buildings for the area
    area = client.buildings.get_area(polygon)

    # 2. Run a wind analysis over the polygon
    result = client.run_area_and_wait(
        WindModelRequest(
            analysis_type=AnalysesName.wind_speed,
            wind_speed=15,
            wind_direction=180,
        ),
        polygon,
        buildings=area.buildings,
    )

    # 3. Result contains a merged grid covering the polygon
    print(f"Grid shape: {result.grid_shape}")

If you are new to the SDK, read in this order:

  1. Examples — runnable notebooks in the public infrared-skills repo.
  2. Output Reference — what each analysis produces, in what units, and how to read the numbers.
  3. Analysis Types — pick the analysis you need and copy the snippet.
  4. Area API → How tiling works — only if your polygon is larger than ~512 m on a side; otherwise tiling is automatic and you can skip it.

Output Reference

Every analysis returns an AreaResult with a 2-D merged_grid (numpy array, ~1 m per cell) covering the polygon. Cells outside the polygon are NaN. The table below lists what each cell value means.

| Analysis | Cell unit | Typical range | Physical meaning |
|---|---|---|---|
| Wind Speed | m/s | 0–20 | Steady-state wind magnitude near pedestrian level for one (speed, direction) pair |
| Pedestrian Wind Comfort | comfort class (int 0–4) | depends on criterion | Categorical comfort/safety class per chosen criterion (e.g. Lawson LDDC: A/B/C/D/E from sit-long to unsafe) |
| Daylight Availability | hours | 0–100 | Hours of usable daylight at the cell over the chosen TimePeriod |
| Direct Sun Hours | hours | 0–(period length) | Cumulative hours of direct sun over the chosen TimePeriod |
| Sky View Factors | fraction | 0–1 | Portion of the sky hemisphere visible from the cell (1 = fully open, 0 = fully obstructed) |
| Solar Radiation | kWh/m² | 0 to several hundred | Cumulative solar irradiance on the ground over the TimePeriod |
| Thermal Comfort (UTCI) | °C (UTCI equivalent) | depends on supplied weather data | Felt temperature combining air temperature, mean radiant temperature, humidity, and wind |
| Thermal Comfort Statistics | % time | 0–100 | Time spent in the chosen band: thermal_comfort, heat_stress, or cold_stress |
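
Since cells outside the polygon are NaN, summarize grids with NaN-aware numpy reductions. A small sketch, reusing the result from the Quick Start:

import numpy as np

grid = result.merged_grid                 # 2-D float array, NaN outside the polygon
print(np.nanmin(grid), np.nanmean(grid), np.nanmax(grid))
coverage = np.count_nonzero(~np.isnan(grid)) / grid.size
print(f"{coverage:.0%} of the bounding box lies inside the polygon")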

Configuration

| Environment Variable | Description | Default |
|---|---|---|
| INFRARED_API_KEY | Your Infrared API key | |
| INFRARED_BASE_URL | API base URL | https://api.infrared.city/v2 |

Both can also be passed directly to the constructor:

# Explicit — pass credentials directly
client = InfraredClient(api_key="your-key", base_url="https://api.infrared.city/v2")

# Env vars — set INFRARED_API_KEY (and optionally INFRARED_BASE_URL), then:
client = InfraredClient()

InfraredClient supports the context manager protocol (with statement) for automatic cleanup of HTTP sessions. You can also call client.close() manually.

Geometry Format

All analysis payloads accept a geometries parameter — a dict mapping building identifiers (strings) to DotBim mesh objects:

geometries = {
    "building-001": {
        "mesh_id": 0,
        "coordinates": [x1, y1, z1, x2, y2, z2, ...],  # flat [x, y, z, ...] array in meters
        "indices": [0, 1, 2, 3, 4, 5, ...]              # triangle index array (optional)
    },
    "building-002": {
        "mesh_id": 1,
        "coordinates": [x1, y1, z1, x2, y2, z2, ...],
        "indices": [0, 1, 2, ...]
    },
}

Each mesh entry follows the DotBim mesh format:

| Field | Type | Description |
|---|---|---|
| mesh_id | int | Numeric mesh identifier |
| coordinates | list[float] | Flat [x, y, z, ...] vertex array. Coordinates are in meters, relative to the tile's south-west corner (see DotBim coordinate system) |
| indices | list[int] or None | Triangle index array (3 indices per face). Optional |
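
For illustration, a hand-built 10 × 10 × 15 m box in this layout (a sketch: triangle winding order is not checked here, and real buildings normally come from the Buildings API):

# A 10 x 10 x 15 m box as a DotBim-style mesh entry (illustrative only)
w, d, h = 10.0, 10.0, 15.0
corners = [
    (0, 0, 0), (w, 0, 0), (0, d, 0), (w, d, 0),   # indices 0-3 (ground)
    (0, 0, h), (w, 0, h), (0, d, h), (w, d, h),   # indices 4-7 (roof)
]
quads = [
    (0, 1, 3, 2), (4, 5, 7, 6),                   # bottom, top
    (0, 1, 5, 4), (2, 3, 7, 6),                   # front, back
    (0, 2, 6, 4), (1, 3, 7, 5),                   # left, right
]
geometries = {
    "building-001": {
        "mesh_id": 0,
        "coordinates": [v for corner in corners for v in corner],
        "indices": [i for (a, b, c, e) in quads for i in (a, b, c, a, c, e)],
    }
}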

You can load geometries from a file or pass the buildings dict returned by the Buildings API:

# Option A: Load from a file
import json
with open("scene.json") as f:
    geometries = json.load(f)

# Option B: Use buildings from the area API
area = client.buildings.get_area(polygon)
geometries = area.buildings  # dict already in the right format

When using run_area_and_wait(), pass buildings separately via the buildings parameter rather than setting geometries on the payload. The SDK handles per-tile coordinate transforms and building assignment automatically.

Optional vegetation and ground_materials can also be fetched and passed to run_area_and_wait(). See Vegetation & Ground Materials for details.

Buildings

DotBim coordinate system

Building coordinates use a local meter-space system: x-axis points east, y-axis points north, z is height.

get_area(polygon) fetches buildings from multiple tiles, deduplicates them, and transforms all coordinates so the origin is the polygon bounding-box SW corner — all buildings share one frame regardless of which tile they came from.

When you pass buildings to run_area_and_wait(), the SDK automatically transforms them from the polygon-bbox-SW frame to each tile's local frame. See Building coordinate transforms in the Area API section for the full explanation.

Check dotbimpy for more information on the dotbim file format.

Building retrieval

Fetch 3D building data for a polygon with automatic deduplication across tiles:

area = client.buildings.get_area(polygon)
print(area.total_buildings)
print(area.buildings)  # dict[str, DotBimMesh]

Time Period

Solar, thermal, and wind-comfort analyses require a TimePeriod to define the time window for the simulation. The time period also determines which weather data points are included when filtering from a weather file.

from infrared_sdk.models import TimePeriod

tp = TimePeriod(
    start_month=6, start_day=1, start_hour=9,
    end_month=8, end_day=31, end_hour=17,
)

| Field | Type | Range | Description |
|---|---|---|---|
| start_month | int | 1–12 | Start month |
| start_day | int | 1–31 | Start day |
| start_hour | int | 0–23 | Start hour |
| end_month | int | 1–12 | End month |
| end_day | int | 1–31 | End day |
| end_hour | int | 0–23 | End hour |

All 6 fields are required.

day values are validated against the calendar month: impossible dates such as April 31, February 30, June 31, September 31, and November 31 raise ValidationError. February 29 is accepted because TimePeriod carries no year context — refusing it would block all valid leap-year windows. The window must also move forward (end > start); year-wrap windows like Nov→Feb are not supported, so split them into two periods if you need that behaviour.
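
For example, constructing an impossible date fails immediately:

from pydantic import ValidationError
from infrared_sdk.models import TimePeriod

try:
    TimePeriod(
        start_month=4, start_day=1, start_hour=9,
        end_month=4, end_day=31, end_hour=17,   # April has 30 days
    )
except ValidationError as e:
    print(e)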

How TimePeriod affects weather data

TimePeriod defines a recurring time window applied across every year in your weather file. It works as a three-level cascade filter:

  1. Months — only data from start_month through end_month is considered.
  2. Days — within each of those months, only days from start_day through end_day are kept.
  3. Hours — within each of those days, only hours from start_hour through end_hour are kept.

Every hourly data point that does not fall inside all three windows is discarded. The diagram below illustrates how TimePeriod(start_month=6, start_day=1, start_hour=9, end_month=8, end_day=20, end_hour=17) filters the data.

Result: 3 months × 20 days × 9 hours = 540 hourly data points per year in the weather file.

Filtering weather data with time period
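
The cascade is easy to reproduce locally. A minimal pure-Python sketch (the Hour record and the toy 28-day year are assumptions for illustration; the SDK's real filtering operates on WeatherDataPoint rows):

from collections import namedtuple
from infrared_sdk.models import TimePeriod

Hour = namedtuple("Hour", "month day hour")

def in_window(pt: Hour, tp: TimePeriod) -> bool:
    """Three-level cascade: months, then days, then hours."""
    return (
        tp.start_month <= pt.month <= tp.end_month
        and tp.start_day <= pt.day <= tp.end_day
        and tp.start_hour <= pt.hour <= tp.end_hour
    )

tp = TimePeriod(start_month=6, start_day=1, start_hour=9,
                end_month=8, end_day=20, end_hour=17)
year = [Hour(m, d, h) for m in range(1, 13) for d in range(1, 29) for h in range(24)]
print(sum(in_window(pt, tp) for pt in year))   # 3 months x 20 days x 9 hours = 540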

Which analyses need a TimePeriod

| Analysis | TimePeriod | Weather Data |
|---|---|---|
| Wind Speed | No | No |
| Sky View Factors | No | No |
| Daylight Availability | Yes | No |
| Direct Sun Hours | Yes | No |
| Solar Radiation | Yes | Yes (radiation arrays) |
| Thermal Comfort (UTCI) | Yes | Yes (temperature, radiation, humidity, wind) |
| Thermal Comfort Statistics | Yes | Yes (same as UTCI) |
| Pedestrian Wind Comfort | Yes (for weather filtering) | Yes (wind speed/direction arrays) |

Weather Data

Search for nearby weather stations and filter data by time range:

from infrared_sdk.models import TimePeriod

# Find weather stations near a location (radius in km)
locations = client.weather.get_weather_file_from_location(
    lat=48.1983, lon=11.575, radius=50
)
# Returns a list of station dicts:
# [
#     {
#         "uuid": "eb91892c-fbe3-4743-ade5-c22cfb5913e1",
#         "fileName": "DEU_BY_Munich-Theresienwiese.108650_TMYx",
#         "location_data": {
#             "city": "Munich-Theresienwiese", "state": "BY", "country": "DEU",
#             "latitude": 48.1632, "longitude": 11.5429, "elevation": 520.0,
#             "time_zone": 1.0, "station_id": "108650", "source": "SRC-TMYx",
#             "type": "Location",
#         },
#     },
#     ...
# ]

# Use the station's uuid to filter weather data by time range. The
# `identifier` parameter on filter_weather_data is the station uuid.
weather_data = client.weather.filter_weather_data(
    identifier=locations[0]["uuid"],
    time_period=TimePeriod(
        start_month=6, start_day=1, start_hour=9,
        end_month=6, end_day=30, end_hour=17,
    ),
)
# Returns a list[WeatherDataPoint], one per matching hour:
# [
#     WeatherDataPoint(dryBulbTemperature=22.3, windSpeed=3.2, windDirection=180.0,
#                      diffuseHorizontalRadiation=120.0, directNormalRadiation=450.0, ...),
#     WeatherDataPoint(dryBulbTemperature=23.1, windSpeed=4.1, windDirection=195.0, ...),
#     ...
# ]

Extracting fields for analysis payloads

Use extract_weather_fields to convert WeatherDataPoint lists into the flat arrays that analysis payloads expect. Field names are passed in camelCase (matching WeatherDataPoint attributes); the returned dict uses snake_case keys:

from infrared_sdk.models import extract_weather_fields

wind_fields = extract_weather_fields(weather_data, ["windSpeed", "windDirection"])
# Returns: {"wind_speed": [3.2, 4.1, ...], "wind_direction": [180, 195, ...]}

Analyses that require weather data (Solar Radiation, UTCI, TCS, PWC) use the from_weatherfile_payload() class method, which extracts the required weather arrays from the data points and constructs the full request automatically.

Analysis Types

All analysis types follow the same pattern: construct a request, call client.run_area_and_wait() with a polygon and buildings, get an AreaResult.

The AreaResult contains a merged_grid (numpy array), min_legend / max_legend for color scale bounds, and metadata about succeeded/failed tiles. See AreaResult for the full schema.

Wind Speed

Simulates the steady-state wind field around buildings for a single inflow condition. Output cells are wind magnitude in m/s near pedestrian height.

| Parameter | Type | Range | Description |
|---|---|---|---|
| wind_speed | int | 1–100 | Inflow wind speed (m/s) |
| wind_direction | int | 0–360 | Inflow direction (degrees, meteorological convention: 0 = wind from north) |

from infrared_sdk.analyses.types import WindModelRequest, AnalysesName

payload = WindModelRequest(
    analysis_type=AnalysesName.wind_speed,
    wind_speed=15,
    wind_direction=180,
)
result = client.run_area_and_wait(payload, polygon, buildings=area.buildings)

Pedestrian Wind Comfort (PWC)

Wind comfort classification using standard criteria. Requires wind speed and direction arrays from weather data.

| Parameter | Type | Description |
|---|---|---|
| criteria | PwcCriteria | Classification standard (see below) |
| wind_speed | list[float] | Wind speed time series from weather data |
| wind_direction | list[float] | Wind direction time series from weather data |

Available criteria: vdi_387, lawson_1970, lawson_2001, lawson_lddc, davenport, nen_8100_comfort, nen_8100_safety

from infrared_sdk.analyses.types import PwcModelRequest, PwcCriteria, AnalysesName
from infrared_sdk.models import TimePeriod, extract_weather_fields

weather_data = client.weather.filter_weather_data(
    identifier="your-weather-file-id",
    time_period=TimePeriod(
        start_month=6, start_day=1, start_hour=9,
        end_month=6, end_day=30, end_hour=17,
    ),
)
wind_fields = extract_weather_fields(weather_data, ["windSpeed", "windDirection"])

payload = PwcModelRequest(
    analysis_type=AnalysesName.pedestrian_wind_comfort,
    criteria=PwcCriteria.lawson_2001,
    **wind_fields,
)
result = client.run_area_and_wait(payload, polygon, buildings=area.buildings)

Daylight Availability

Simulates daylight availability at a location over a time period.

| Parameter | Type | Range | Description |
|---|---|---|---|
| latitude | float | -90 to 90 | Location latitude |
| longitude | float | -180 to 180 | Location longitude |
| time_period | TimePeriod | | Analysis time window |

from infrared_sdk.analyses.types import SolarModelRequest, AnalysesName
from infrared_sdk.models import TimePeriod

payload = SolarModelRequest(
    analysis_type=AnalysesName.daylight_availability,
    latitude=48.1983,
    longitude=11.575,
    time_period=TimePeriod(
        start_month=6, start_day=1, start_hour=9,
        end_month=6, end_day=30, end_hour=17,
    ),
)
result = client.run_area_and_wait(payload, polygon, buildings=area.buildings)

Direct Sun Hours

Simulates direct sun hours. Same parameters as Daylight Availability.

payload = SolarModelRequest(
    analysis_type=AnalysesName.direct_sun_hours,
    latitude=48.1983,
    longitude=11.575,
    time_period=TimePeriod(
        start_month=6, start_day=1, start_hour=9,
        end_month=6, end_day=30, end_hour=17,
    ),
)
result = client.run_area_and_wait(payload, polygon, buildings=area.buildings)

Sky View Factors (SVF)

Calculates sky view factors. Geometry-only — no time period or weather data needed.

| Parameter | Type | Range | Description |
|---|---|---|---|
| latitude | float | -90 to 90 | Optional. Tile-centroid latitude used by the vegetation validator; SVF inference itself does not read it. |
| longitude | float | -180 to 180 | Optional. Tile-centroid longitude used by the vegetation validator; SVF inference itself does not read it. |

from infrared_sdk.analyses.types import SvfModelRequest, AnalysesName

payload = SvfModelRequest(
    analysis_type=AnalysesName.sky_view_factors,
    latitude=48.1983,    # optional — only needed if you inject vegetation
    longitude=11.575,    # optional — only needed if you inject vegetation
)
result = client.run_area_and_wait(payload, polygon, buildings=area.buildings)

Solar Radiation

Simulates solar radiation. Requires weather data arrays for diffuse horizontal and direct normal radiation.

from infrared_sdk.analyses.types import (
    SolarRadiationModelRequest, BaseAnalysisPayload, AnalysesName,
)
from infrared_sdk.models import TimePeriod, Location

tp = TimePeriod(
    start_month=6, start_day=1, start_hour=9,
    end_month=6, end_day=30, end_hour=17,
)

weather_data = client.weather.filter_weather_data(
    identifier="your-weather-file-id",
    time_period=tp,
)

payload = SolarRadiationModelRequest.from_weatherfile_payload(
    payload=BaseAnalysisPayload(
        analysis_type=AnalysesName.solar_radiation,
    ),
    location=Location(latitude=48.1983, longitude=11.575),
    time_period=tp,
    weather_data=weather_data,
)
result = client.run_area_and_wait(payload, polygon, buildings=area.buildings)

Thermal Comfort Index (UTCI)

Calculates the Universal Thermal Climate Index. Requires filtered weather data.

from infrared_sdk.analyses.types import UtciModelRequest, UtciModelBaseRequest, AnalysesName
from infrared_sdk.models import TimePeriod, Location

tp = TimePeriod(
    start_month=6, start_day=1, start_hour=9,
    end_month=6, end_day=30, end_hour=17,
)

weather_data = client.weather.filter_weather_data(
    identifier="your-weather-file-id",
    time_period=tp,
)

payload = UtciModelRequest.from_weatherfile_payload(
    payload=UtciModelBaseRequest(
        analysis_type=AnalysesName.thermal_comfort_index,
    ),
    location=Location(latitude=48.1983, longitude=11.575),
    time_period=tp,
    weather_data=weather_data,
)
result = client.run_area_and_wait(payload, polygon, buildings=area.buildings)

Thermal Comfort Statistics (TCS)

Aggregated thermal comfort over a time period. Three subtypes: thermal_comfort, heat_stress, cold_stress.

from infrared_sdk.analyses.types import TcsModelBaseRequest, TcsModelRequest, TcsSubtype, AnalysesName
from infrared_sdk.models import TimePeriod, Location

tp = TimePeriod(
    start_month=6, start_day=1, start_hour=9,
    end_month=6, end_day=30, end_hour=17,
)

weather_data = client.weather.filter_weather_data(
    identifier="your-weather-file-id",
    time_period=tp,
)

payload = TcsModelRequest.from_weatherfile_payload(
    payload=TcsModelBaseRequest(
        analysis_type=AnalysesName.thermal_comfort_statistics,
        subtype=TcsSubtype.heat_stress,
    ),
    location=Location(latitude=48.1983, longitude=11.575),
    time_period=tp,
    weather_data=weather_data,
)
result = client.run_area_and_wait(payload, polygon, buildings=area.buildings)

Analysis Names Reference

| Enum Value | API Name |
|---|---|
| AnalysesName.wind_speed | wind-speed |
| AnalysesName.pedestrian_wind_comfort | pedestrian-wind-comfort |
| AnalysesName.daylight_availability | daylight-availability |
| AnalysesName.direct_sun_hours | direct-sun-hours |
| AnalysesName.sky_view_factors | sky-view-factors |
| AnalysesName.solar_radiation | solar-radiation |
| AnalysesName.thermal_comfort_index | thermal-comfort-index |
| AnalysesName.thermal_comfort_statistics | thermal-comfort-statistics |

Vegetation & Ground Materials

The SDK can fetch vegetation (trees) and ground material layers (asphalt, grass, water, etc.) for a polygon. Fetch them explicitly and pass to run_area_and_wait():

# Fetch vegetation (trees from OSM)
area_veg = client.vegetation.get_area(polygon)
print(f"{area_veg.total_trees} trees found")

# Fetch ground materials (from Mapbox)
area_gm = client.ground_materials.get_area(polygon)
print(f"{area_gm.total_features} features found")

# Pass to run_area_and_wait
result = client.run_area_and_wait(
    payload, polygon,
    buildings=area.buildings,
    vegetation=area_veg.features,
    ground_materials=area_gm.layers,
)

Fetch once and reuse across multiple analysis runs over the same polygon to avoid redundant API calls.

Layer parameter behaviour

The buildings, vegetation, and ground_materials parameters on run_area() / run_area_and_wait() are opt-in: nothing is auto-fetched.

| Value | Behavior |
|---|---|
| None (default) or {} | Skip — no data of this type is injected into the simulation |
| {...} (non-empty) | Use the provided data |

If you need vegetation or ground materials in a simulation, fetch them with the dedicated sub-clients (client.vegetation.get_area(), client.ground_materials.get_area()) and pass the result. Wind / SVF analyses generally don't need them; thermal and solar analyses produce more realistic results when they are included.

Format

| Field | Format | Coordinate frame |
|---|---|---|
| buildings | DotBim meshes (coordinates flat XYZ list, indices face triplets) | polygon-bbox-SW meters; SDK transforms to tile-SW per tile |
| vegetation | GeoJSON Feature dict keyed by OSM id; each Feature has geometry.coordinates = [lon, lat] and OSM tree properties | lon/lat — the inference layer handles projection and any geometry conversion |
| ground_materials | Dict of GeoJSON FeatureCollections keyed by material name (asphalt, concrete, vegetation, water, soil, building) | lon/lat — projected server-side |

Area API

For multi-tile analyses over large polygons, the area API handles tiling, building assignment, and result merging automatically.

Cost preview

Before running an area analysis, preview how many tiles it will require:

preview = client.preview_area(polygon)
print(f"Tiles: {preview.tile_count}")
print(f"Estimated time: {preview.estimated_time_s}s")
print(f"Estimated cost: {preview.estimated_cost_tokens} tokens")

| Field | Type | Description |
|---|---|---|
| tile_count | int | Number of non-empty tiles |
| estimated_time_s | float | Estimated wall-clock time (10 s/tile) |
| estimated_cost_tokens | int | Estimated token cost (10 tokens/tile) |

Basic usage

from infrared_sdk import InfraredClient
from infrared_sdk.analyses.types import WindModelRequest, AnalysesName

polygon = {
    "type": "Polygon",
    "coordinates": [[
        [13.4050, 52.5200],
        [13.4110, 52.5200],
        [13.4110, 52.5254],
        [13.4050, 52.5254],
        [13.4050, 52.5200],
    ]],
}

with InfraredClient() as client:
    # Fetch buildings once
    area = client.buildings.get_area(polygon)

    # Run analysis — buildings are reused
    wind_result = client.run_area_and_wait(
        WindModelRequest(
            analysis_type=AnalysesName.wind_speed,
            wind_speed=10, wind_direction=180,
        ),
        polygon,
        buildings=area.buildings,
    )

    print(wind_result.grid_shape)      # e.g. (768, 1024)
    print(wind_result.succeeded_jobs)  # number of tiles that completed

Multi-analysis runs

Run several analysis types over the same polygon in a single parallel batch by passing a list of payloads. All tile submissions across all analysis types are pooled into one shared thread pool, so tiles from different analysis types can be in flight simultaneously:

results = client.run_area_and_wait(
    [wind_payload, svf_payload, solar_payload],
    polygon,
    buildings=area.buildings,
)

# Results are returned as a list in the same order as the input payloads
wind_result = results[0]
svf_result  = results[1]
solar_result = results[2]

The same applies to parameter sweeps of one analysis type — passing a list of payloads with different config (e.g. 8 wind directions) submits all 8 × tile_count jobs through a single shared 20-worker pool rather than running 8 sequential per-direction batches.
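
For example, a sketch of an 8-direction sweep, reusing the polygon and buildings from Basic usage:

from infrared_sdk.analyses.types import WindModelRequest, AnalysesName

payloads = [
    WindModelRequest(analysis_type=AnalysesName.wind_speed,
                     wind_speed=10, wind_direction=direction)
    for direction in range(0, 360, 45)          # 0, 45, ..., 315
]
results = client.run_area_and_wait(payloads, polygon, buildings=area.buildings)
# results[i] corresponds to payloads[i]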

Concurrency at scale

  • Per-call cap: the SDK caps in-flight submissions at max_workers (default 20) regardless of how many payloads × tiles you pass. So run_area_and_wait([8 payloads], polygon) with 24 tiles per payload still uses 20 concurrent submissions, not 192. The max_workers argument tunes this per call.
  • Multi-user / multi-process: each InfraredClient instance has its own pool. To go above 20 simultaneous submissions, instantiate multiple clients in separate threads or processes — the API is designed to handle parallel callers.
  • Cold start: the first request in a session typically takes 2–5× longer than subsequent ones (Lambda cold start). Benchmark numbers from warm runs are not representative of first-call latency.
  • Backend limits: the API enforces an account-level concurrency ceiling on simulation execution. Contact support if you regularly need to exceed ~100 simultaneous tile jobs.

Webhooks with multi-payload batches

When webhook_url is set on a multi-payload run, your endpoint will receive up to payloads × tiles events in a tight time window — much denser than per-payload sequential submission. Make sure your endpoint can handle the burst (queue ingestion / batch DB writes recommended).

Polygon requirements

  • GeoJSON Polygon format: {"type": "Polygon", "coordinates": [[[lon, lat], ...]]}
  • Coordinate order: [longitude, latitude] (GeoJSON standard)
  • Single ring, closed, at least 3 unique vertices, no self-intersections
  • Max ~100 non-empty tiles (override with max_tiles_override)

How tiling works

The Infrared API simulates a fixed 512×512 m tile at a time. To analyse a polygon larger than one tile, the SDK splits it into a grid of overlapping tiles, runs each one in parallel, then crops and stitches the results into a single merged grid.

Tile geometry

Every tile has three key dimensions:

| Parameter | Description |
|---|---|
| Inference size (512 m) | The area actually simulated by the API. Always 512×512 m, producing a 512×512 cell grid (1 m per cell). |
| Context size | The area used to select which buildings are sent with the tile. May be larger than the inference size so buildings outside the tile that cast shadows or affect wind can be included. |
| Step size | The distance between adjacent tile centres. Controls how much tiles overlap. |

These parameters differ between wind and solar model groups:

| Config | Inference | Context | Step | Overlap | Crop |
|---|---|---|---|---|---|
| Wind (wind-speed, pedestrian-wind-comfort) | 512 m | 512 m | 256 m | 50% (256 m) | Centre 256×256 cells |
| Solar (all other types) | 512 m | 666 m | 512 m | None (edge-to-edge) | Full 512×512 cells |

Why the difference? Wind effects propagate laterally — a building's wind shadow extends far downwind. Dense 50% overlap with centre-cropping ensures each point in the merged grid comes from the most accurate central region of a tile. Solar/daylight analyses need long shadows from distant buildings (hence the wider 666 m context, adding 77 m on each side) but the output itself doesn't benefit from overlap, so tiles are placed edge-to-edge.

wind tiling diagram

solar tiling diagram

Merging

After all tiles complete, the SDK extracts a centre crop from each tile's 512×512 result:

  • Wind: crops the inner 256×256 cells (discards the 128-cell border on each side), then places each crop at its grid position. Adjacent crops meet exactly — no blending needed because each point was computed from the tile where it's most central.
  • Solar: uses the full 512×512 result (no crop), placed edge-to-edge.

Cells outside the input polygon are set to NaN via cell-level point-in-polygon clipping.
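
Conceptually, the wind-mode merge looks like this (a sketch of the stitching step only, with synthetic tile data; the SDK additionally handles partial tiles and the NaN clipping described above):

import numpy as np

CROP, BORDER = 256, 128                          # inner crop, discarded border
rows, cols = 2, 3                                # toy tile grid
tile_results = {(r, c): np.random.rand(512, 512)
                for r in range(rows) for c in range(cols)}

merged = np.full((rows * CROP, cols * CROP), np.nan)
for (r, c), tile in tile_results.items():
    crop = tile[BORDER:BORDER + CROP, BORDER:BORDER + CROP]
    merged[r * CROP:(r + 1) * CROP, c * CROP:(c + 1) * CROP] = crop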

Building coordinate transforms

This is the most important piece to understand when working with the area API:

  1. client.buildings.get_area(polygon) fetches buildings from multiple tiles, deduplicates them, and transforms all coordinates into the polygon bounding-box SW frame — the south-west corner of the polygon's bounding box is origin (0, 0), x points east, y points north, values in meters.

geometries/buildings area coordinate

  2. When you pass buildings to run_area_and_wait(), the SDK must assign each building to the tile(s) it overlaps. For each tile, the SDK:
    • Computes the tile's inference SW offset relative to the polygon bbox SW (based on the tile's row/col and the step size)
    • Expands the tile's bounding box by the context margin (0 m for wind, 77 m for solar) — this expanded area is only used to select which buildings to include
    • Tests each building's bounding box against this expanded context area
    • Deep-copies the building and subtracts the inference tile's SW offset from its coordinates, converting from polygon-bbox-SW frame to tile-SW frame

tile buildings coordinates

This means the same building can appear in multiple adjacent tiles (with different coordinates in each), which is correct — the API expects buildings in the tile's local coordinate frame.

If you provide your own buildings to run_area_and_wait(), they must be in the polygon-bbox-SW frame. The SDK handles the per-tile transform automatically. Buildings returned by client.buildings.get_area() are already in this frame.
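
The per-tile transform itself is just a translation. A sketch of what the SDK does internally (the function name and tile offsets are illustrative):

import copy

def to_tile_frame(building: dict, tile_sw_x: float, tile_sw_y: float) -> dict:
    """Shift a mesh from the polygon-bbox-SW frame into a tile's local frame."""
    shifted = copy.deepcopy(building)
    offsets = (tile_sw_x, tile_sw_y, 0.0)        # z is unchanged
    shifted["coordinates"] = [
        v - offsets[i % 3] for i, v in enumerate(building["coordinates"])
    ]
    return shifted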

Other details

  • Parallelism: Up to 20 concurrent API calls per run (configurable via max_workers)
  • Retry: 2 retries with exponential backoff + jitter for HTTP 429/5xx
  • Single-tile bypass: If the polygon fits in one tile, tiling overhead is skipped
  • Projection: Local tangent plane approximation, accurate for city-scale polygons (<50 km span)

AreaResult

| Field | Type | Description |
|---|---|---|
| merged_grid | numpy.ndarray | Merged, clipped grid (NaN outside polygon) |
| polygon | dict | The source GeoJSON polygon |
| analysis_type | str | Which analysis type was run |
| grid_shape | tuple[int, int] | (rows, cols) of merged grid |
| failed_jobs | list[str] | Job IDs that failed |
| skipped_jobs | list[str] | Job IDs that were skipped (download error, etc.) |
| total_jobs | int | Total number of jobs submitted |
| succeeded_jobs | int | Number of jobs that succeeded |
| min_legend | float or None | Minimum legend value across all tile results, or None if the server didn't supply bounds; fall back to np.nanmin(merged_grid) |
| max_legend | float or None | Maximum legend value across all tile results, or None if the server didn't supply bounds; fall back to np.nanmax(merged_grid) |

Serialize for JSON: result.to_dict() (converts the numpy grid to nested lists with NaN replaced by None).
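
A common pattern when rendering is to fall back to the grid extremes whenever legend bounds are missing:

import numpy as np

vmin = result.min_legend if result.min_legend is not None else np.nanmin(result.merged_grid)
vmax = result.max_legend if result.max_legend is not None else np.nanmax(result.merged_grid)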

Image Generation

Generate a PNG image from analysis results:

result = client.run_area_and_wait(payload, polygon, buildings=area.buildings)
grid = result.merged_grid.tolist()

img_bytes = client.weather.gen_grid_image(
    grid=grid,
    analysis_type="wind-speed",  # optional: improves color mapping
)

with open("output.png", "wb") as f:
    f.write(img_bytes)

gen_grid_image also accepts optional criteria and subtype parameters for PWC and TCS analyses.

Async Jobs & Webhooks

The SDK supports two execution styles for analyses: synchronous polling — run_area_and_wait() blocks until results are ready — and asynchronous submission — run_area() returns immediately with an AreaSchedule, the API processes jobs in the background, and your service is notified through webhooks (or by manual polling). Pick the style that matches how your code waits for the result.

Prefer async + webhooks when:

  • Long-running, large-area runs where blocking a process for minutes is impractical.
  • Headless / serverless / batch jobs where there is no caller to keep open.
  • Multi-analysis or parameter-sweep batches that submit many tiles at once.
  • Multi-user or fan-out backends where many polygons are scheduled concurrently and a single webhook stream consolidates completions.

Prefer synchronous polling (run_area_and_wait) when:

  • Notebooks or interactive scripts where the result is consumed inline.
  • Small polygons or single-tile runs that complete in seconds.
  • Local development and debugging — no public webhook endpoint required.
  • Environments without a routable webhook URL (corporate networks, ad-hoc machines).

Single-tile primitives

For direct control over a single job — custom polling, replaying jobs from your own queue, or wiring webhooks at the analysis level — use the low-level primitives client.analyses.execute() and client.jobs.*. Most users should reach for run_area() / run_area_and_wait() instead; these primitives exist for advanced workflows.

from infrared_sdk import InfraredClient, WEBHOOK_EVENT_SUCCEEDED, WEBHOOK_EVENT_FAILED
from infrared_sdk.analyses.jobs import JobStatus

with InfraredClient() as client:
    # 1. Submit (returns immediately)
    job = client.analyses.execute(
        payload=payload,
        webhook_url="https://your-server.com/webhooks",
        webhook_events=[WEBHOOK_EVENT_SUCCEEDED, WEBHOOK_EVENT_FAILED],
    )
    print(job.job_id, job.status)  # e.g. "abc-123", JobStatus.pending

    # 2a. Either poll manually...
    snapshot = client.jobs.get_status(job.job_id)
    if snapshot.status == JobStatus.succeeded:
        download = client.jobs.download_results(job.job_id)

    # 2b. ...or block on a convenience wrapper (returns when terminal)
    completed = client.jobs.wait_for_completion(job.job_id, timeout=300)

    # 2c. ...or skip polling entirely and react to the webhook delivery instead.

    # 3. Download results once the job has Succeeded
    download = client.jobs.download_results(completed.job_id)

JobStatus is a string enum returned by client.jobs.get_status() and exposed on the Job dataclass. The five values are:

| Value | Type | Description |
|---|---|---|
| pending | str | Job has been accepted by the API and is queued for execution. |
| running | str | Job is currently being processed by the inference backend. |
| succeeded | str | Terminal — results are ready to download via client.jobs.download_results(). |
| failed | str | Terminal — the job did not produce a result; inspect job.error for details. |
| unknown | str | Status string was not recognised (forward-compat fallback). Treat as non-terminal. |

Async area runs with run_area

client.run_area() is the async counterpart to run_area_and_wait(). It performs the same tiling, building assignment, and submission, but returns an AreaSchedule describing the in-flight jobs without blocking on completion.

from infrared_sdk import InfraredClient, WEBHOOK_EVENT_SUCCEEDED, WEBHOOK_EVENT_FAILED

with InfraredClient() as client:
    area = client.buildings.get_area(polygon)

    schedule = client.run_area(
        payload,
        polygon,
        buildings=area.buildings,
        webhook_url="https://your-server.com/webhooks",
        webhook_events=[WEBHOOK_EVENT_SUCCEEDED, WEBHOOK_EVENT_FAILED],
    )

    print(f"Submitted {len(schedule.jobs)} tile jobs ({len(schedule.failed_submissions)} submission errors)")

    # ... your webhook receiver records each job.succeeded / job.failed event ...

    # Once all jobs are terminal, download and merge into a single AreaResult
    result = client.merge_area_jobs(schedule)

AreaSchedule schema:

| Field | Type | Description |
|---|---|---|
| jobs | dict[str, str] | Mapping of tile_id to the submitted job_id. |
| polygon | dict | The source GeoJSON polygon (used by merge_area_jobs to clip). |
| analysis_type | str | Which analysis type was submitted. |
| failed_submissions | tuple[str, ...] | Tile IDs whose submission HTTP call failed; passed to retry_from=. |
| webhook_url | str or None | Webhook URL the schedule was submitted with (preserved for retries). |
| webhook_events | tuple[str, ...] or None | Webhook events the schedule subscribed to. |

run_area accepts a list of payloads (multi-analysis or parameter sweeps) and returns list[AreaSchedule] — one per payload — sharing a single thread pool. See Webhooks with multi-payload batches above for the burst-rate caveat that applies when these schedules deliver events to the same endpoint.

Persistence and retry. AreaSchedule.to_dict() and AreaSchedule.from_dict() round-trip JSON-safely, so a schedule can be persisted (database, file, queue message) between submission and merge. To retry only the tiles whose submission failed, pass the original schedule back via retry_from=: client.run_area(payload, polygon, retry_from=prior_schedule) resubmits just prior_schedule.failed_submissions and prior_schedule.merge(retry_schedule) produces a single combined schedule.
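
A sketch of that flow (the AreaSchedule import path is an assumption; check your SDK version for the actual module):

import json

# Process A: submit and persist the schedule
schedule = client.run_area(payload, polygon, buildings=area.buildings)
with open("schedule.json", "w") as f:
    json.dump(schedule.to_dict(), f)

# Process B (later): reload, retry failed submissions, merge schedules
from infrared_sdk.area import AreaSchedule   # import path is an assumption

with open("schedule.json") as f:
    prior = AreaSchedule.from_dict(json.load(f))

if prior.failed_submissions:
    retry = client.run_area(payload, polygon, buildings=area.buildings,
                            retry_from=prior)
    prior = prior.merge(retry)

result = client.merge_area_jobs(prior)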

Manual polling. When a webhook endpoint is not available, client.check_area_state(schedule) queries every job status in parallel and returns an AreaState (counts of pending/running/succeeded/failed and an is_complete flag) suitable for a polling loop.
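
For example, a simple polling loop (field names per the AreaState description above; the 10 s interval is arbitrary):

import time

while True:
    state = client.check_area_state(schedule)
    print(f"succeeded={state.succeeded} failed={state.failed} "
          f"pending={state.pending} running={state.running}")
    if state.is_complete:
        break
    time.sleep(10)

result = client.merge_area_jobs(schedule)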

Submission retries. Tile submissions retry HTTP 429 / 5xx with exponential backoff and jitter (max_retries=2); see the note under Area API → Other details.

run_area_and_wait also accepts webhook_url=. Passing webhook_url= (and optionally webhook_events=) to run_area_and_wait() does not change its blocking behaviour — the call still returns the merged AreaResult locally — but it also asks the API to deliver per-job lifecycle events to your endpoint. Use this when you want the convenience of a synchronous result inside a script while still streaming job-level signals into a backend (queue, database, monitoring).

Webhooks

Webhook endpoints are the back-channel the API uses to signal job lifecycle changes. Endpoints are registered once per environment via client.webhooks.*; per-job subscriptions are attached at submit time using webhook_url= / webhook_events=.

from infrared_sdk import InfraredClient
from infrared_sdk import WEBHOOK_EVENT_SUCCEEDED, WEBHOOK_EVENT_FAILED

with InfraredClient() as client:
    # Register an endpoint
    endpoint = client.webhooks.register(
        url="https://your-server.com/webhooks",
        type="production",
    )
    print(f"Endpoint ID: {endpoint.id}")

    # List all registered endpoints
    endpoints = client.webhooks.list()

    # Delete an endpoint when no longer needed
    client.webhooks.delete(endpoint.id)

The type argument selects the server-side environment (and signing-secret pair) that an endpoint is bound to: "production" for production traffic, "development" for development / staging traffic. The SDK forwards type to the API verbatim; it does not change client behaviour.

The signing secret for a registered endpoint is available in your account dashboard at app.infrared.city after registration. Treat the secret like an API key: store it server-side and pass it directly to verify_signature().

Webhook events. When you submit a job, pass the events you want delivered as the raw event-name strings on the wire:

  • job.running — job has started processing.
  • job.succeeded — job completed successfully (results available via client.jobs.download_results).
  • job.failed — job failed (the event payload includes the error reason).

In Python code, prefer the SDK constants WEBHOOK_EVENT_RUNNING, WEBHOOK_EVENT_SUCCEEDED, WEBHOOK_EVENT_FAILED (re-exported from infrared_sdk) instead of typing the strings.

Signature verification. Every delivery is signed with the Standard Webhooks v1 HMAC-SHA256 scheme. The webhook-id, webhook-timestamp, and webhook-signature headers carry the message id, signing timestamp, and HMAC respectively. tolerance (default 300 s) bounds how old a timestamp may be before the call is rejected as a replay.

Always verify against the raw request body bytes — verifying against parsed JSON or a re-encoded string changes byte-level whitespace and breaks the HMAC. This is the most common cause of webhook verification failures. verify_signature() accepts secrets with the whsec_ prefix as stored in the dashboard; the prefix is stripped internally before HMAC computation.

from infrared_sdk import WebhooksServiceClient

is_valid = WebhooksServiceClient.verify_signature(
    payload_body=request_body,    # raw bytes from the HTTP request body
    headers=request_headers,
    secret="whsec_...",            # signing secret from the dashboard
    tolerance=300,                 # seconds — replay-attack window
)
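
For instance, a minimal Flask receiver that verifies the raw body before parsing (the endpoint wiring and event-payload shape are assumptions; only verify_signature comes from the SDK):

from flask import Flask, abort, request
from infrared_sdk import WebhooksServiceClient

app = Flask(__name__)
SIGNING_SECRET = "whsec_..."   # from the dashboard; keep server-side

@app.post("/webhooks")
def receive():
    # request.get_data() is the raw body bytes; never re-serialize parsed JSON
    if not WebhooksServiceClient.verify_signature(
        payload_body=request.get_data(),
        headers=dict(request.headers),
        secret=SIGNING_SECRET,
    ):
        abort(401)
    event = request.get_json()   # event field names are an assumption
    # ... enqueue for processing; respond quickly to avoid delivery timeouts
    return "", 204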

Pre-flight Diagnostics

At low sun angles, building shadows can extend beyond the per-tile geometry buffer and silently lose context near tile edges. The pre-flight check (estimate_sun_context_loss) flags those configurations before you run.

Error Handling

Payload validation: The SDK validates all payloads at construction time using Pydantic. Invalid inputs raise ValidationError immediately:

from pydantic import ValidationError

try:
    payload = WindModelRequest(
        analysis_type=AnalysesName.wind_speed,
        wind_speed=200,  # exceeds max of 100
        wind_direction=180,
    )
except ValidationError as e:
    print(e)  # field validation errors

HTTP errors: The SDK automatically retries HTTP 429 (rate-limited) and 5xx (server error) responses with exponential backoff and jitter. Non-retryable errors (401, 403) raise immediately.

Job-level errors: All job exceptions inherit from InfraredJobError:

| Exception | When |
|---|---|
| JobSubmitError | Job submission failed |
| JobPollError | Error while polling status |
| JobFailedError | Job completed with failed status |
| JobTimeoutError | Polling timed out |
| ResultsDownloadError | Failed to download results |

Area-level errors: raised by the area orchestration path (run_area_and_wait, merge_area_jobs, buildings.get_area). They do not inherit from InfraredJobError — catch them separately:

| Exception | When |
|---|---|
| AreaRunError | Every job in the run failed (server error, download permanently failed, or per-tile grid rejected). Carries failed_jobs, skipped_jobs, total_jobs. |
| AreaTimeoutError | run_area_and_wait exceeded its area_timeout. Carries the live area_state snapshot so callers can decide whether to keep polling. |
| TiledRunError | client.buildings.get_area(...) (or other tiled fetchers) had every tile fail after retries. Carries failed_tiles. Partial failures don't raise — inspect area.failed_tiles on the returned AreaBuildings instead. |

from infrared_sdk import AreaRunError  # importable from the top-level package

try:
    result = client.run_area_and_wait(payload, polygon, buildings=area.buildings)
except AreaRunError as exc:
    # Every job failed — log per-job state and either retry or fail loudly.
    print(f"All {exc.total_jobs} jobs failed: {exc.failed_jobs}, skipped: {exc.skipped_jobs}")
    raise

Cookbook and examples

Notebooks, agent skills (Claude Code / Cursor / Codex / Copilot / Windsurf), and runnable Python recipes live in Infrared-city/infrared-skills — start at cookbook/notebooks/. Each notebook is self-contained and ordered as a learning path:

| Notebook | Topic |
|---|---|
| 00_quickstart.ipynb | Install, env, instantiate the client, run one analysis end-to-end |
| 01_buildings.ipynb | client.buildings.get_area, DotBim mesh format, building heights |
| 02_vegetation_and_ground.ipynb | client.vegetation, client.ground_materials, layer formats |
| 03_weather_and_time_periods.ipynb | Weather file lookup, filter_weather_data, TimePeriod semantics |
| 04_tiling_and_area_api.ipynb | preview_area, rectangular vs. irregular polygons, tile geometry, AreaResult |
| 05_analysis_types_tour.ipynb | All 8 analysis types with payload patterns and outputs |
| 06_image_rendering.ipynb | gen_grid_image, orientation, colormap caveats |
| 07_async_and_webhooks.ipynb | run_area, check_area_state, merge_area_jobs, webhooks |

License

Apache-2.0.