Metadata Operations¶
Operations for scanning, generating, and managing metadata.
Bases: StashClientProtocol
Mixin for metadata and database operation methods.
Functions¶
metadata_generate
async
¶
```python
metadata_generate(
    options: GenerateMetadataOptions | dict[str, Any] | None = None,
    input_data: GenerateMetadataInput | dict[str, Any] | None = None,
) -> str
```
Generate metadata.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `options` | `GenerateMetadataOptions \| dict[str, Any] \| None` | GenerateMetadataOptions object or dictionary of what to generate:<br>- `covers` (`bool`): Generate covers<br>- `sprites` (`bool`): Generate sprites<br>- `previews` (`bool`): Generate previews<br>- `imagePreviews` (`bool`): Generate image previews<br>- `previewOptions` (`GeneratePreviewOptionsInput`):<br>&nbsp;&nbsp;- `previewSegments` (`int`): Number of segments in a preview file<br>&nbsp;&nbsp;- `previewSegmentDuration` (`float`): Duration of each segment in seconds<br>&nbsp;&nbsp;- `previewExcludeStart` (`str`): Duration to exclude from the start<br>&nbsp;&nbsp;- `previewExcludeEnd` (`str`): Duration to exclude from the end<br>&nbsp;&nbsp;- `previewPreset` (`PreviewPreset`): Preset to use when generating previews<br>- `markers` (`bool`): Generate markers<br>- `markerImagePreviews` (`bool`): Generate marker image previews<br>- `markerScreenshots` (`bool`): Generate marker screenshots<br>- `transcodes` (`bool`): Generate transcodes<br>- `forceTranscodes` (`bool`): Generate transcodes even if not required<br>- `phashes` (`bool`): Generate phashes<br>- `interactiveHeatmapsSpeeds` (`bool`): Generate interactive heatmaps and speeds<br>- `imageThumbnails` (`bool`): Generate image thumbnails<br>- `clipPreviews` (`bool`): Generate clip previews | `None` |
| `input_data` | `GenerateMetadataInput \| dict[str, Any] \| None` | Optional GenerateMetadataInput object or dictionary specifying what to process:<br>- `sceneIDs` (`list[str]`): Scene IDs to generate for (default: all)<br>- `markerIDs` (`list[str]`): Marker IDs to generate for (default: all)<br>- `imageIDs` (`list[str]`): Image IDs to generate for (appSchema >= 84)<br>- `galleryIDs` (`list[str]`): Gallery IDs to generate for (appSchema >= 84)<br>- `paths` (`list[str]`): Paths to run generation on, in addition to the ID lists<br>- `overwrite` (`bool`): Overwrite existing media (default: `False`) | `None` |
Returns:

| Type | Description |
|---|---|
| `str` | Job ID for the generation task |
Raises:

| Type | Description |
|---|---|
| `ValueError` | If the input data is invalid |
| `TransportError` | If the request fails |
metadata_scan
async
¶
Start a metadata scan job.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `paths` | `list[str] \| None` | List of paths to scan (`None` = all paths) | `None` |
| `flags` | `dict[str, Any] \| None` | Dict of scan flags to override defaults (`rescan`, `scanGenerateCovers`, etc.) | `None` |
Returns:

| Type | Description |
|---|---|
| `str` | Job ID for the scan operation |
get_configuration_defaults
async
¶
Get default configuration settings.
metadata_clean
async
¶
Clean metadata and remove orphaned database entries.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `input_data` | `CleanMetadataInput \| dict[str, Any]` | CleanMetadataInput object or dictionary containing:<br>- `paths`: List of paths to clean (optional)<br>- `dry_run`: Whether to perform a dry run (optional, default: `False`) | *required* |
Returns:

| Type | Description |
|---|---|
| `str` | Job ID for the clean operation |
Examples:
Clean all metadata:
Dry run to see what would be cleaned:
Clean specific paths:
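A minimal sketch of the three calls above, using the dictionary keys documented for this method; the paths are illustrative, and the client calls themselves are shown commented because they require a live server:

```python
# Clean the whole library (no paths given):
clean_all = {"dry_run": False}

# Dry run: report what would be cleaned without deleting anything:
dry_run = {"dry_run": True}

# Clean specific paths only (paths are illustrative):
clean_paths = {"paths": ["/media/archive"], "dry_run": False}

# job_id = await client.metadata_clean(clean_paths)
```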
metadata_clean_generated
async
¶
Clean generated files (sprites, previews, screenshots, etc.).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `input_data` | `CleanGeneratedInput \| dict[str, Any]` | CleanGeneratedInput object or dictionary containing:<br>- `blobFiles`: Clean blob files (optional)<br>- `dryRun`: Whether to perform a dry run (optional, default: `False`)<br>- `imageThumbnails`: Clean image thumbnails (optional)<br>- `markers`: Clean marker files (optional)<br>- `screenshots`: Clean screenshot files (optional)<br>- `sprites`: Clean sprite files (optional)<br>- `transcodes`: Clean transcode files (optional) | *required* |
Returns:

| Type | Description |
|---|---|
| `str` | Job ID for the clean operation |
Examples:
Clean all generated files:

```python
job_id = await client.metadata_clean_generated({
    "blobFiles": True,
    "imageThumbnails": True,
    "markers": True,
    "screenshots": True,
    "sprites": True,
    "transcodes": True,
    "dryRun": False
})
```
Dry run to see what would be cleaned:
Clean only specific types:
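Sketches for the two scenarios above; the field names follow the table for this method, and the calls (commented) need a live client:

```python
# Dry run across every generated file type:
dry_run = {
    "blobFiles": True,
    "imageThumbnails": True,
    "markers": True,
    "screenshots": True,
    "sprites": True,
    "transcodes": True,
    "dryRun": True,
}

# Clean only sprites and transcodes:
specific = {"sprites": True, "transcodes": True, "dryRun": False}

# job_id = await client.metadata_clean_generated(dry_run)
```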
metadata_auto_tag
async
¶
```python
metadata_auto_tag(
    input_data: AutoTagMetadataInput | dict[str, Any],
) -> str
```
Start auto-tagging metadata task.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `input_data` | `AutoTagMetadataInput \| dict[str, Any]` | AutoTagMetadataInput object or dictionary containing:<br>- `paths`: List of paths to tag, `None` for all files (optional)<br>- `performers`: List of performer IDs to tag with, or `["*"]` for all (optional)<br>- `studios`: List of studio IDs to tag with, or `["*"]` for all (optional)<br>- `tags`: List of tag IDs to tag with, or `["*"]` for all (optional) | *required* |
Returns:

| Type | Description |
|---|---|
| `str` | Job ID for the auto-tagging task |
Examples:
Auto-tag all files with all performers:

```python
job_id = await client.metadata_auto_tag({
    "performers": ["*"]
})
print(f"Auto-tag job started: {job_id}")
```
Auto-tag specific paths with specific performers:
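A sketch of a targeted auto-tag call; the paths and performer IDs below are illustrative, not taken from a real library:

```python
input_data = {
    "paths": ["/media/new-scenes"],  # only tag files under these paths
    "performers": ["12", "34"],      # tag with these performer IDs only
}
# job_id = await client.metadata_auto_tag(input_data)
```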
metadata_identify
async
¶
Start metadata identification task using scrapers.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `input_data` | `IdentifyMetadataInput \| dict[str, Any]` | IdentifyMetadataInput object or dictionary containing:<br>- `sources`: List of scraper sources to use (required)<br>- `options`: Identification options (optional)<br>- `sceneIDs`: List of scene IDs to identify (optional)<br>- `paths`: List of scene paths to identify (optional, ignored if `sceneIDs` is set) | *required* |
Returns:

| Type | Description |
|---|---|
| `str` | Job ID for the identification task |
Examples:
Identify scenes using a stash-box endpoint:

```python
job_id = await client.metadata_identify({
    "sources": [
        {
            "source": {
                "stashBoxEndpoint": "https://stashdb.org/graphql"
            }
        }
    ],
    "sceneIDs": ["1", "2", "3"]
})
print(f"Identify job started: {job_id}")
```
Identify with custom options:

```python
from stash_graphql_client.types import (
    IdentifyMetadataInput,
    IdentifySourceInput,
    ScraperSourceInput,
    IdentifyMetadataOptionsInput,
)

input_data = IdentifyMetadataInput(
    sources=[
        IdentifySourceInput(
            source=ScraperSourceInput(
                stashBoxEndpoint="https://stashdb.org/graphql"
            ),
            options=IdentifyMetadataOptionsInput(
                setCoverImage=True,
                includeMalePerformers=False,
            ),
        )
    ],
    paths=["/path/to/scenes"],
)
job_id = await client.metadata_identify(input_data)
```
metadata_import
async
¶
Start full metadata import from metadata directory.
This operation completely wipes the database and imports from the metadata directory. Use with caution.
Returns:

| Type | Description |
|---|---|
| `str` | Job ID for the import task |
Examples:
Start full metadata import:
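Since the call takes no input and wipes the database, a cautious wrapper can require explicit confirmation. The guard below is an illustrative pattern, not part of the library:

```python
async def full_import(client, confirmed: bool = False) -> str:
    """Run a full metadata import after an explicit operator confirmation."""
    if not confirmed:
        raise RuntimeError("metadata_import wipes the database; pass confirmed=True")
    # Returns the job ID for the import task:
    return await client.metadata_import()
```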
metadata_export
async
¶
export_objects
async
¶
Export objects to a downloadable file.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `input_data` | `ExportObjectsInput \| dict[str, Any]` | ExportObjectsInput object or dictionary containing:<br>- `ids`: List of object IDs to export (optional)<br>- `all`: Export all objects (optional, default: `False`)<br>- `type`: Object type to export (required)<br>- `format`: Export format (optional) | *required* |
Returns:

| Type | Description |
|---|---|
| `str` | Download token for the exported file |
Examples:
Export all scenes:

```python
token = await client.export_objects({
    "all": True,
    "type": "SCENE"
})
download_url = f"{client.url}/downloads/{token}"
```
Export specific performers:
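A sketch of exporting selected performers by ID; the `"PERFORMER"` type string and the IDs are assumptions, extrapolated from the `"SCENE"` example above:

```python
export_input = {
    "type": "PERFORMER",
    "ids": ["5", "8"],  # export these objects instead of "all": True
}
# token = await client.export_objects(export_input)
# download_url = f"{client.url}/downloads/{token}"
```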
import_objects
async
¶
Import objects from a file.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `input_data` | `ImportObjectsInput \| dict[str, Any]` | ImportObjectsInput object or dictionary containing:<br>- `file`: File to import from (required)<br>- `duplicateBehaviour`: How to handle duplicates (optional)<br>- `missingRefBehaviour`: How to handle missing references (optional) | *required* |
Returns:

| Type | Description |
|---|---|
| `str` | Import job ID |
Examples:
Import from file:

```python
job_id = await client.import_objects({
    "file": "/path/to/export.json"
})
print(f"Import job started: {job_id}")
```
Import with duplicate handling:
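A sketch of an import with explicit duplicate and missing-reference handling; the enum values `"IGNORE"` and `"FAIL"` are assumptions not confirmed by this page:

```python
import_input = {
    "file": "/path/to/export.json",
    "duplicateBehaviour": "IGNORE",  # skip objects that already exist
    "missingRefBehaviour": "FAIL",   # abort when a referenced object is missing
}
# job_id = await client.import_objects(import_input)
```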
backup_database
async
¶
Create a database backup.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `input_data` | `BackupDatabaseInput \| dict[str, Any]` | BackupDatabaseInput object or dictionary containing:<br>- `download`: Whether to download the backup (optional, default: `True`) | *required* |
Returns:

| Type | Description |
|---|---|
| `str` | Backup file path or download token |
Examples:
Create and download backup:

```python
token = await client.backup_database({"download": True})
download_url = f"{client.url}/downloads/{token}"
```
Create backup without downloading:
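A sketch of the non-downloading variant; with `download` disabled, the return value is a server-side backup file path rather than a download token:

```python
backup_input = {"download": False}
# backup_path = await client.backup_database(backup_input)
```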
anonymise_database
async
¶
Anonymise the database by removing identifying information.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `input_data` | `AnonymiseDatabaseInput \| dict[str, Any]` | AnonymiseDatabaseInput object or dictionary containing:<br>- `download`: Whether to download the anonymised backup (optional, default: `True`) | *required* |
Returns:

| Type | Description |
|---|---|
| `str` | Anonymised backup file path or download token |
Examples:
Anonymise and download:

```python
token = await client.anonymise_database({"download": True})
download_url = f"{client.url}/downloads/{token}"
```
Anonymise without downloading:
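A sketch of the non-downloading variant; with `download` disabled, the return value is a server-side file path for the anonymised backup rather than a download token:

```python
anonymise_input = {"download": False}
# backup_path = await client.anonymise_database(anonymise_input)
```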
migrate
async
¶
Migrate database to the latest schema version.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `input_data` | `MigrateInput \| dict[str, Any]` | MigrateInput object or dictionary containing:<br>- `backupPath`: Path to create a backup at before migration (required) | *required* |
Returns:

| Type | Description |
|---|---|
| `str` | Migration job ID |
Examples:
Migrate database with backup:

```python
job_id = await client.migrate({"backupPath": "/path/to/backup.db"})
print(f"Migration job started: {job_id}")
```
Using the input type:
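A typed-input sketch, assuming `MigrateInput` mirrors the documented field; the import line is shown commented so the dictionary form below stays self-contained:

```python
# from stash_graphql_client.types import MigrateInput
# input_data = MigrateInput(backupPath="/path/to/backup.db")

# Equivalent dictionary form accepted by the same method:
input_data = {"backupPath": "/path/to/backup.db"}
# job_id = await client.migrate(input_data)
```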
migrate_hash_naming
async
¶
migrate_scene_screenshots
async
¶
Migrate scene screenshots to the latest storage format.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `input_data` | `MigrateSceneScreenshotsInput \| dict[str, Any]` | MigrateSceneScreenshotsInput object or dictionary containing:<br>- `deleteFiles`: Delete old screenshot files (optional)<br>- `overwriteExisting`: Overwrite existing screenshots (optional) | *required* |
Returns:

| Type | Description |
|---|---|
| `str` | Migration job ID |
Examples:
Migrate screenshots and delete old files:

```python
job_id = await client.migrate_scene_screenshots({
    "deleteFiles": True,
    "overwriteExisting": False
})
print(f"Screenshot migration started: {job_id}")
```
Using the input type:
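A typed-input sketch, assuming `MigrateSceneScreenshotsInput` mirrors the documented fields; the import line is shown commented so the dictionary form stays self-contained:

```python
# from stash_graphql_client.types import MigrateSceneScreenshotsInput
# input_data = MigrateSceneScreenshotsInput(deleteFiles=True, overwriteExisting=False)

# Equivalent dictionary form accepted by the same method:
input_data = {"deleteFiles": True, "overwriteExisting": False}
# job_id = await client.migrate_scene_screenshots(input_data)
```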
migrate_blobs
async
¶
Migrate blobs to the latest storage format.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `input_data` | `MigrateBlobsInput \| dict[str, Any]` | MigrateBlobsInput object or dictionary containing:<br>- `deleteOld`: Delete old blob files after migration (optional) | *required* |
Returns:

| Type | Description |
|---|---|
| `str` | Migration job ID |
Examples:
Migrate blobs and keep old files:

```python
job_id = await client.migrate_blobs({"deleteOld": False})
print(f"Blob migration started: {job_id}")
```
Using the input type:
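A typed-input sketch, assuming `MigrateBlobsInput` mirrors the documented field; the import line is shown commented so the dictionary form stays self-contained:

```python
# from stash_graphql_client.types import MigrateBlobsInput
# input_data = MigrateBlobsInput(deleteOld=False)

# Equivalent dictionary form accepted by the same method:
input_data = {"deleteOld": False}
# job_id = await client.migrate_blobs(input_data)
```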
optimise_database
async
¶
Optimise the database.
Returns:

| Type | Description |
|---|---|
| `str` | Job ID for the optimisation task |
setup
async
¶
Run initial Stash setup.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `input_data` | `SetupInput \| dict[str, Any]` | SetupInput object or dictionary | *required* |
Returns:

| Type | Description |
|---|---|
| `bool` | `True` if successful |
download_ffmpeg
async
¶
Download FFmpeg binary.
Returns:

| Type | Description |
|---|---|
| `str` | Job ID for the download task |