Metadata Operations

Operations for scanning, generating, and managing metadata.

Bases: StashClientProtocol

Mixin for metadata and database operation methods.

Functions

metadata_generate async

metadata_generate(
    options: GenerateMetadataOptions
    | dict[str, Any]
    | None = None,
    input_data: GenerateMetadataInput
    | dict[str, Any]
    | None = None,
) -> str

Start a metadata generation job.

Parameters:

Name Type Description Default
options GenerateMetadataOptions | dict[str, Any] | None

GenerateMetadataOptions object or dictionary of what to generate:

- covers: bool - Generate covers
- sprites: bool - Generate sprites
- previews: bool - Generate previews
- imagePreviews: bool - Generate image previews
- previewOptions: GeneratePreviewOptionsInput - Preview generation options:
  - previewSegments: int - Number of segments in a preview file
  - previewSegmentDuration: float - Duration of each segment in seconds
  - previewExcludeStart: str - Duration to exclude from the start
  - previewExcludeEnd: str - Duration to exclude from the end
  - previewPreset: PreviewPreset - Preset to use when generating previews
- markers: bool - Generate markers
- markerImagePreviews: bool - Generate marker image previews
- markerScreenshots: bool - Generate marker screenshots
- transcodes: bool - Generate transcodes
- forceTranscodes: bool - Generate transcodes even if not required
- phashes: bool - Generate phashes
- interactiveHeatmapsSpeeds: bool - Generate interactive heatmaps and speeds
- imageThumbnails: bool - Generate image thumbnails
- clipPreviews: bool - Generate clip previews

None
input_data GenerateMetadataInput | dict[str, Any] | None

Optional GenerateMetadataInput object or dictionary to specify what to process:

- sceneIDs: list[str] - Scene IDs to generate for (default: all)
- markerIDs: list[str] - Marker IDs to generate for (default: all)
- imageIDs: list[str] - Image IDs to generate for (appSchema >= 84)
- galleryIDs: list[str] - Gallery IDs to generate for (appSchema >= 84)
- paths: list[str] - Paths to run generate on, in addition to the ID lists
- overwrite: bool - Overwrite existing media (default: False)

None

Returns:

Type Description
str

Job ID for the generation task

Raises:

Type Description
ValueError

If the input data is invalid

TransportError

If the request fails
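
Examples:

Generate covers and phashes for specific scenes. This is a sketch: `build_generate_request` and `generate_covers_and_phashes` are hypothetical helpers, and `client` is assumed to be an initialized async client.

```python
def build_generate_request(scene_ids, overwrite=False):
    # Assemble the two dictionaries expected by metadata_generate:
    # what to generate (options) and what to process (input_data).
    options = {"covers": True, "phashes": True}
    input_data = {"sceneIDs": list(scene_ids), "overwrite": overwrite}
    return options, input_data


async def generate_covers_and_phashes(client, scene_ids):
    # Fire off the generation job and return its job ID for polling.
    options, input_data = build_generate_request(scene_ids)
    return await client.metadata_generate(options, input_data)
```

The returned job ID can be tracked like any other background job.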

metadata_scan async

metadata_scan(
    paths: list[str] | None = None,
    flags: dict[str, Any] | None = None,
) -> str

Start a metadata scan job.

Parameters:

Name Type Description Default
paths list[str] | None

List of paths to scan (None = all paths)

None
flags dict[str, Any] | None

Dict of scan flags to override defaults (rescan, scanGenerateCovers, etc.)

None

Returns:

Type Description
str

Job ID for the scan operation
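
Examples:

Scan a subset of the library with cover generation enabled. A sketch: `scan_new_content` is a hypothetical helper, and `client` is assumed to be an initialized async client.

```python
async def scan_new_content(client, paths=None):
    # Scan the given paths (or the whole library when paths is None),
    # overriding the default flags to generate covers during the scan.
    flags = {"scanGenerateCovers": True}
    return await client.metadata_scan(paths=paths, flags=flags)
```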

get_configuration_defaults async

get_configuration_defaults() -> ConfigDefaultSettingsResult

Get default configuration settings.
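
A minimal usage sketch, assuming an initialized async client:

```python
async def show_defaults(client):
    # Fetch the server-side default scan/generate settings.
    defaults = await client.get_configuration_defaults()
    print(defaults)
    return defaults
```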

metadata_clean async

metadata_clean(
    input_data: CleanMetadataInput | dict[str, Any],
) -> str

Clean metadata and remove orphaned database entries.

Parameters:

Name Type Description Default
input_data CleanMetadataInput | dict[str, Any]

CleanMetadataInput object or dictionary containing:

- paths: List of paths to clean (optional)
- dry_run: Whether to perform a dry run (optional, default: False)

required

Returns:

Type Description
str

Job ID for the clean operation

Examples:

Clean all metadata:

job_id = await client.metadata_clean({"dry_run": False})
print(f"Clean job started: {job_id}")

Dry run to see what would be cleaned:

job_id = await client.metadata_clean({"dry_run": True})

Clean specific paths:

from stash_graphql_client.types import CleanMetadataInput

input_data = CleanMetadataInput(
    paths=["/path/to/clean"],
    dry_run=False
)
job_id = await client.metadata_clean(input_data)

metadata_clean_generated async

metadata_clean_generated(
    input_data: CleanGeneratedInput | dict[str, Any],
) -> str

Clean generated files (sprites, previews, screenshots, etc.).

Parameters:

Name Type Description Default
input_data CleanGeneratedInput | dict[str, Any]

CleanGeneratedInput object or dictionary containing:

- blobFiles: Clean blob files (optional)
- dryRun: Whether to perform a dry run (optional, default: False)
- imageThumbnails: Clean image thumbnails (optional)
- markers: Clean marker files (optional)
- screenshots: Clean screenshot files (optional)
- sprites: Clean sprite files (optional)
- transcodes: Clean transcode files (optional)

required

Returns:

Type Description
str

Job ID for the clean operation

Examples:

Clean all generated files:

job_id = await client.metadata_clean_generated({
    "blobFiles": True,
    "imageThumbnails": True,
    "markers": True,
    "screenshots": True,
    "sprites": True,
    "transcodes": True,
    "dryRun": False
})

Dry run to see what would be cleaned:

job_id = await client.metadata_clean_generated({"dryRun": True})

Clean only specific types:

from stash_graphql_client.types import CleanGeneratedInput

input_data = CleanGeneratedInput(
    sprites=True,
    screenshots=True,
    dryRun=False
)
job_id = await client.metadata_clean_generated(input_data)

metadata_auto_tag async

metadata_auto_tag(
    input_data: AutoTagMetadataInput | dict[str, Any],
) -> str

Start auto-tagging metadata task.

Parameters:

Name Type Description Default
input_data AutoTagMetadataInput | dict[str, Any]

AutoTagMetadataInput object or dictionary containing:

- paths: List of paths to tag, None for all files (optional)
- performers: List of performer IDs to tag with, or ["*"] for all (optional)
- studios: List of studio IDs to tag with, or ["*"] for all (optional)
- tags: List of tag IDs to tag with, or ["*"] for all (optional)

required

Returns:

Type Description
str

Job ID for the auto-tagging task

Examples:

Auto-tag all files with all performers:

job_id = await client.metadata_auto_tag({
    "performers": ["*"]
})
print(f"Auto-tag job started: {job_id}")

Auto-tag specific paths with specific performers:

from stash_graphql_client.types import AutoTagMetadataInput

input_data = AutoTagMetadataInput(
    paths=["/path/to/videos"],
    performers=["1", "2", "3"],
    studios=["*"],
    tags=None
)
job_id = await client.metadata_auto_tag(input_data)

metadata_identify async

metadata_identify(
    input_data: IdentifyMetadataInput | dict[str, Any],
) -> str

Start metadata identification task using scrapers.

Parameters:

Name Type Description Default
input_data IdentifyMetadataInput | dict[str, Any]

IdentifyMetadataInput object or dictionary containing:

- sources: List of scraper sources to use (required)
- options: Identification options (optional)
- sceneIDs: List of scene IDs to identify (optional)
- paths: List of scene paths to identify (optional; ignored if sceneIDs is set)

required

Returns:

Type Description
str

Job ID for the identification task

Examples:

Identify scenes using a stash-box endpoint:

job_id = await client.metadata_identify({
    "sources": [
        {
            "source": {
                "stashBoxEndpoint": "https://stashdb.org/graphql"
            }
        }
    ],
    "sceneIDs": ["1", "2", "3"]
})
print(f"Identify job started: {job_id}")

Identify with custom options:

from stash_graphql_client.types import (
    IdentifyMetadataInput,
    IdentifySourceInput,
    ScraperSourceInput,
    IdentifyMetadataOptionsInput
)

input_data = IdentifyMetadataInput(
    sources=[
        IdentifySourceInput(
            source=ScraperSourceInput(
                stashBoxEndpoint="https://stashdb.org/graphql"
            ),
            options=IdentifyMetadataOptionsInput(
                setCoverImage=True,
                includeMalePerformers=False
            )
        )
    ],
    paths=["/path/to/scenes"]
)
job_id = await client.metadata_identify(input_data)

metadata_import async

metadata_import() -> str

Start full metadata import from metadata directory.

This operation completely wipes the database and imports from the metadata directory. Use with caution.

Returns:

Type Description
str

Job ID for the import task

Examples:

Start full metadata import:

job_id = await client.metadata_import()
print(f"Import job started: {job_id}")

metadata_export async

metadata_export() -> str

Start full metadata export to metadata directory.

Exports the entire database to the configured metadata directory.

Returns:

Type Description
str

Job ID for the export task

Examples:

Start full metadata export:

job_id = await client.metadata_export()
print(f"Export job started: {job_id}")

export_objects async

export_objects(
    input_data: ExportObjectsInput | dict[str, Any],
) -> str

Export objects to a downloadable file.

Parameters:

Name Type Description Default
input_data ExportObjectsInput | dict[str, Any]

ExportObjectsInput object or dictionary containing:

- ids: List of object IDs to export (optional)
- all: Export all objects (optional, default: False)
- type: Object type to export (required)
- format: Export format (optional)

required

Returns:

Type Description
str

Download token for the exported file

Examples:

Export all scenes:

token = await client.export_objects({
    "all": True,
    "type": "SCENE"
})
download_url = f"{client.url}/downloads/{token}"

Export specific performers:

from stash_graphql_client.types import ExportObjectsInput

input_data = ExportObjectsInput(
    ids=["1", "2", "3"],
    type="PERFORMER"
)
token = await client.export_objects(input_data)

import_objects async

import_objects(
    input_data: ImportObjectsInput | dict[str, Any],
) -> str

Import objects from a file.

Parameters:

Name Type Description Default
input_data ImportObjectsInput | dict[str, Any]

ImportObjectsInput object or dictionary containing:

- file: File to import from (required)
- duplicateBehaviour: How to handle duplicates (optional)
- missingRefBehaviour: How to handle missing references (optional)

required

Returns:

Type Description
str

Import job ID

Examples:

Import from file:

job_id = await client.import_objects({
    "file": "/path/to/export.json"
})
print(f"Import job started: {job_id}")

Import with duplicate handling:

from stash_graphql_client.types import ImportObjectsInput

input_data = ImportObjectsInput(
    file="/path/to/export.json",
    duplicateBehaviour="IGNORE"
)
job_id = await client.import_objects(input_data)

backup_database async

backup_database(
    input_data: BackupDatabaseInput | dict[str, Any],
) -> str

Create a database backup.

Parameters:

Name Type Description Default
input_data BackupDatabaseInput | dict[str, Any]

BackupDatabaseInput object or dictionary containing:

- download: Whether to download the backup (optional, default: True)

required

Returns:

Type Description
str

Backup file path or download token

Examples:

Create and download backup:

token = await client.backup_database({"download": True})
download_url = f"{client.url}/downloads/{token}"

Create backup without downloading:

from stash_graphql_client.types import BackupDatabaseInput

input_data = BackupDatabaseInput(download=False)
path = await client.backup_database(input_data)
print(f"Backup created at: {path}")

anonymise_database async

anonymise_database(
    input_data: AnonymiseDatabaseInput | dict[str, Any],
) -> str

Anonymise the database by removing identifying information.

Parameters:

Name Type Description Default
input_data AnonymiseDatabaseInput | dict[str, Any]

AnonymiseDatabaseInput object or dictionary containing:

- download: Whether to download the anonymised backup (optional, default: True)

required

Returns:

Type Description
str

Anonymised backup file path or download token

Examples:

Anonymise and download:

token = await client.anonymise_database({"download": True})
download_url = f"{client.url}/downloads/{token}"

Anonymise without downloading:

from stash_graphql_client.types import AnonymiseDatabaseInput

input_data = AnonymiseDatabaseInput(download=False)
path = await client.anonymise_database(input_data)
print(f"Anonymised backup created at: {path}")

migrate async

migrate(input_data: MigrateInput | dict[str, Any]) -> str

Migrate database to the latest schema version.

Parameters:

Name Type Description Default
input_data MigrateInput | dict[str, Any]

MigrateInput object or dictionary containing:

- backupPath: Path at which to create a backup before migration (required)

required

Returns:

Type Description
str

Migration job ID

Examples:

Migrate database with backup:

job_id = await client.migrate({"backupPath": "/path/to/backup.db"})
print(f"Migration job started: {job_id}")

Using the input type:

from stash_graphql_client.types import MigrateInput

input_data = MigrateInput(backupPath="/path/to/backup.db")
job_id = await client.migrate(input_data)

migrate_hash_naming async

migrate_hash_naming() -> str

Migrate hash naming scheme to the latest version.

Returns:

Type Description
str

Migration job ID

Examples:

Migrate hash naming:

job_id = await client.migrate_hash_naming()
print(f"Hash naming migration started: {job_id}")

migrate_scene_screenshots async

migrate_scene_screenshots(
    input_data: MigrateSceneScreenshotsInput
    | dict[str, Any],
) -> str

Migrate scene screenshots to the latest storage format.

Parameters:

Name Type Description Default
input_data MigrateSceneScreenshotsInput | dict[str, Any]

MigrateSceneScreenshotsInput object or dictionary containing:

- deleteFiles: Delete old screenshot files (optional)
- overwriteExisting: Overwrite existing screenshots (optional)

required

Returns:

Type Description
str

Migration job ID

Examples:

Migrate screenshots and delete old files:

job_id = await client.migrate_scene_screenshots({
    "deleteFiles": True,
    "overwriteExisting": False
})
print(f"Screenshot migration started: {job_id}")

Using the input type:

from stash_graphql_client.types import MigrateSceneScreenshotsInput

input_data = MigrateSceneScreenshotsInput(
    deleteFiles=True,
    overwriteExisting=True
)
job_id = await client.migrate_scene_screenshots(input_data)

migrate_blobs async

migrate_blobs(
    input_data: MigrateBlobsInput | dict[str, Any],
) -> str

Migrate blobs to the latest storage format.

Parameters:

Name Type Description Default
input_data MigrateBlobsInput | dict[str, Any]

MigrateBlobsInput object or dictionary containing:

- deleteOld: Delete old blob files after migration (optional)

required

Returns:

Type Description
str

Migration job ID

Examples:

Migrate blobs and keep old files:

job_id = await client.migrate_blobs({"deleteOld": False})
print(f"Blob migration started: {job_id}")

Using the input type:

from stash_graphql_client.types import MigrateBlobsInput

input_data = MigrateBlobsInput(deleteOld=True)
job_id = await client.migrate_blobs(input_data)

optimise_database async

optimise_database() -> str

Optimise the database.

Returns:

Type Description
str

Job ID for the optimisation task
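
Examples:

Start an optimisation and report the job ID. A sketch, assuming an initialized async client:

```python
async def optimise(client):
    # Kick off database optimisation; returns immediately with a job ID.
    job_id = await client.optimise_database()
    print(f"Optimisation job started: {job_id}")
    return job_id
```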

setup async

setup(input_data: SetupInput | dict[str, Any]) -> bool

Run initial Stash setup.

Parameters:

Name Type Description Default
input_data SetupInput | dict[str, Any]

SetupInput object or dictionary

required

Returns:

Type Description
bool

True if successful
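
Examples:

Run first-time setup. A sketch: the fields of SetupInput depend on your deployment, so the input is passed through unmodified here, and `client` is assumed to be an initialized async client.

```python
async def run_setup(client, input_data):
    # input_data is a SetupInput object or dict; see the SetupInput
    # type for the available fields.
    ok = await client.setup(input_data)
    if not ok:
        raise RuntimeError("Stash setup reported failure")
    return ok
```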

download_ffmpeg async

download_ffmpeg() -> str

Download FFmpeg binary.

Returns:

Type Description
str

Job ID for the download task
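
Examples:

Download FFmpeg and keep the job ID for polling. A sketch, assuming an initialized async client:

```python
async def fetch_ffmpeg(client):
    # Start the FFmpeg download as a background job.
    job_id = await client.download_ffmpeg()
    print(f"FFmpeg download started: {job_id}")
    return job_id
```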