Set Variable Dimensions On Attribution
Enrich attribution results with variable and response curve dimension attributes using the Alviss AI API.
This tutorial covers downloading attribution data, fetching the associated variable metadata, and merging the two so that dimension attributes (such as product category, channel, or any custom dimensions) become columns on each attribution row. This is useful for filtering, grouping, and analysing attribution results by the dimensions defined on your variables and response curves. We'll walk through the process step by step using Python with the requests and pandas libraries to interact with the Alviss AI API, providing a runnable code snippet for each step.
Prerequisites
- You need a valid access token from the Alviss AI platform (see the Authentication section in the main API docs).
- Know your team ID, project ID, and the attribution ID you want to enrich.
- Install the required Python libraries if not already present:
pip install requests pandas
Step 1: Import Necessary Libraries
Import requests for making HTTP API calls, io for handling byte streams, and pandas for data manipulation.
import requests
import io
import pandas as pd
Step 2: Set Up Variables
Define the base API URL, your access token, team ID, project ID, and the attribution ID. Replace placeholders like <SET ME> with actual values.
url = "https://app.alviss.io/api/v1/api"
team_id = "<SET ME>"
project_id = "<SET ME>"
token = "<SET ME>"
attribution_id = 12
Step 3: Prepare Authentication Headers
Create a headers dictionary with the Authorization Bearer token to authenticate all API requests.
headers = {"Authorization": "Bearer " + token}
Step 4: Construct the Project URL
Build the base URL for the team and project-specific endpoints by formatting the team ID and project ID into the URL string.
team_project_url = url + f"/projects/{team_id}/{project_id}"
Step 5: Download Attribution Data
Send a GET request to the /attributions/{attribution_id}/data endpoint. The response contains CSV-formatted attribution results which we parse into a DataFrame using pandas.
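The parsing half of this call can be previewed locally with synthetic CSV bytes. A minimal sketch; the column names here are illustrative, not the actual API schema:

```python
import io

import pandas as pd

# Synthetic stand-in for the CSV body the /data endpoint returns
# (column names are made up for illustration)
csv_bytes = b"Variable,Response,Attribution\ntv_spend,sales,120.5\nsearch_spend,sales,80.0\n"

# read_csv accepts any file-like object, so wrapping the raw bytes
# in io.BytesIO is enough to parse an in-memory HTTP response body
df = pd.read_csv(io.BytesIO(csv_bytes))
print(df.shape)  # (2, 3)
```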
resp = requests.get(
team_project_url + f"/attributions/{attribution_id}/data",
headers=headers,
)
df = pd.read_csv(io.BytesIO(resp.content))
Step 6: Fetch Variable Metadata
Send a GET request to the /attributions/{attribution_id}/variables endpoint. The response is a JSON array of variable definitions including their dimension attributes (util_attr) and linked response curve information (attribution_response).
resp = requests.get(
team_project_url + f"/attributions/{attribution_id}/variables",
headers=headers,
)
df_vars = pd.DataFrame(resp.json())
Step 7: Reshape Variable Metadata
Select and rename the columns we need for the merge: the variable slug, its dimension attributes, the linked response curve slug, and the response curve dimension attributes.
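The .str accessor used here indexes into each element of a dict-valued column, which is how the nested response curve fields are pulled out. A toy illustration with made-up values:

```python
import pandas as pd

# A single made-up row mirroring the shape of the /variables response
df_vars = pd.DataFrame(
    {
        "Slug": ["tv_spend"],
        "util_attr": [{"channel": "tv"}],
        "attribution_response": [{"Slug": "sales", "util_attr": {"kpi": "revenue"}}],
    }
)

# .str[...] applies the key lookup element-wise, so each nested dict
# contributes one value to the resulting Series
slugs = df_vars["attribution_response"].str["Slug"]
print(slugs.tolist())  # ['sales']
```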
df_vars = pd.DataFrame(
{
"Variable": df_vars["Slug"],
"util_attr": df_vars["util_attr"],
"Response": df_vars["attribution_response"].str["Slug"],
"response_util_attr": df_vars["attribution_response"].str["util_attr"],
}
)
Step 8: Merge Attribution Data with Variable Metadata
Join the attribution data with the variable metadata on the Variable and Response columns. This adds the dimension attribute dictionaries to each attribution row.
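If you want to verify that every attribution row finds matching metadata, pandas' merge supports an indicator flag. This optional check is not part of the tutorial flow, and the sample frames below are invented:

```python
import pandas as pd

# Minimal stand-ins for the attribution data and the reshaped metadata
df = pd.DataFrame(
    {"Variable": ["tv_spend"], "Response": ["sales"], "Attribution": [120.5]}
)
df_vars = pd.DataFrame(
    {
        "Variable": ["tv_spend"],
        "util_attr": [{"channel": "tv"}],
        "Response": ["sales"],
        "response_util_attr": [{"kpi": "revenue"}],
    }
)

# indicator=True adds a _merge column recording where each row came from;
# rows without metadata show up as "left_only"
checked = pd.merge(df, df_vars, on=["Variable", "Response"], how="left", indicator=True)
unmatched = checked[checked["_merge"] != "both"]
print(len(unmatched))  # 0 when every attribution row found metadata
```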
merge_df = pd.merge(df, df_vars, on=["Variable", "Response"])
Step 9: Expand Dimension Attributes into Columns
Use pd.json_normalize to flatten the dimension attribute dictionaries into individual columns. Variable dimensions are prefixed with Var_ and response curve dimensions with Resp_ to avoid name collisions. The original dictionary columns are dropped and replaced by the expanded ones.
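pd.json_normalize turns a sequence of dicts into one column per key, filling missing keys with NaN. A small illustration with invented dimension keys:

```python
import pandas as pd

# Invented dimension dicts; each key becomes a column after normalisation
attrs = [{"channel": "tv", "region": "EU"}, {"channel": "search"}]

# add_prefix namespaces the expanded columns, mirroring the Var_ prefix
flat = pd.json_normalize(attrs).add_prefix("Var_")
print(flat.columns.tolist())  # ['Var_channel', 'Var_region']
```

The second row has no region key, so its Var_region cell is NaN rather than an error.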
var_attrs = pd.json_normalize(
merge_df["util_attr"].apply(lambda x: x or {})
).add_prefix("Var_")
resp_attrs = pd.json_normalize(
merge_df["response_util_attr"].apply(lambda x: x or {})
).add_prefix("Resp_")
merge_df = pd.concat(
[merge_df.drop(columns=["util_attr", "response_util_attr"]), var_attrs, resp_attrs],
axis=1,
)
The resulting merge_df DataFrame now contains all original attribution columns plus one column per dimension attribute, ready for downstream analysis.
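With the dimensions expanded, standard pandas grouping applies directly. For example, assuming your variables define a channel dimension (so a hypothetical Var_channel column exists; the rows below are invented):

```python
import pandas as pd

# Invented enriched attribution rows; Var_channel stands in for an
# expanded dimension column produced in Step 9
merge_df = pd.DataFrame(
    {
        "Variable": ["tv_spend", "search_spend", "tv_sponsorship"],
        "Attribution": [120.5, 80.0, 30.0],
        "Var_channel": ["tv", "search", "tv"],
    }
)

# Aggregate attribution by the dimension rather than by raw variable
summary = merge_df.groupby("Var_channel")["Attribution"].sum()
print(summary.to_dict())  # {'search': 80.0, 'tv': 150.5}
```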
Full Example Code
import requests
import io
import pandas as pd
url = "https://app.alviss.io/api/v1/api"
team_id = "<SET ME>"
project_id = "<SET ME>"
token = "<SET ME>"
attribution_id = 12
headers = {"Authorization": "Bearer " + token}
team_project_url = url + f"/projects/{team_id}/{project_id}"
resp = requests.get(
team_project_url + f"/attributions/{attribution_id}/data",
headers=headers,
)
df = pd.read_csv(io.BytesIO(resp.content))
resp = requests.get(
team_project_url + f"/attributions/{attribution_id}/variables",
headers=headers,
)
df_vars = pd.DataFrame(resp.json())
df_vars = pd.DataFrame(
{
"Variable": df_vars["Slug"],
"util_attr": df_vars["util_attr"],
"Response": df_vars["attribution_response"].str["Slug"],
"response_util_attr": df_vars["attribution_response"].str["util_attr"],
}
)
merge_df = pd.merge(df, df_vars, on=["Variable", "Response"])
var_attrs = pd.json_normalize(
merge_df["util_attr"].apply(lambda x: x or {})
).add_prefix("Var_")
resp_attrs = pd.json_normalize(
merge_df["response_util_attr"].apply(lambda x: x or {})
).add_prefix("Resp_")
merge_df = pd.concat(
[merge_df.drop(columns=["util_attr", "response_util_attr"]), var_attrs, resp_attrs],
axis=1,
)