Salesprompter

Docs that map the app, CLI, and extension as one system.

Public documentation for the Salesprompter contracts, workflows, and runtime behavior. This deployment is generated directly from the docs source in the CLI repository.

The CLI talks to BigQuery through the local bq CLI. It does not proxy warehouse queries through the Salesprompter app backend.

Prerequisites

  • bq installed locally
  • local Google Cloud auth configured
  • access to the datasets referenced by the CLI

Project selection

The CLI resolves the BigQuery project in this order:

  1. BQ_PROJECT_ID
  2. GOOGLE_CLOUD_PROJECT
  3. GCLOUD_PROJECT
  4. icpidentifier

If you do nothing, the CLI defaults to icpidentifier.
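The resolution order above can be sketched as follows (a minimal Python sketch; the function name is illustrative, not the CLI's actual implementation):

```python
import os

# Fallback project used when no environment variable is set.
DEFAULT_PROJECT = "icpidentifier"

def resolve_bq_project(env=os.environ):
    """Return the BigQuery project ID, checking env vars in priority order."""
    for var in ("BQ_PROJECT_ID", "GOOGLE_CLOUD_PROJECT", "GCLOUD_PROJECT"):
        value = env.get(var)
        if value:
            return value
    return DEFAULT_PROJECT
```

So BQ_PROJECT_ID wins over the gcloud variables, and an unset environment falls through to icpidentifier.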

Tables and views currently referenced

The current implementation references these warehouse objects:

  • icpidentifier.SalesGPT.leadPool_new
  • icpidentifier.SalesPrompter.leadLists_raw
  • icpidentifier.SalesPrompter.leadLists_unique
  • icpidentifier.SalesPrompter.linkedinSearchExport_people_unique
  • icpidentifier.SalesPrompter.salesNavigatorSearchExport_companies_unique
  • icpidentifier.SalesPrompter.salesNavigatorSearchExport_companies_unique_enriched
  • icpidentifier.SalesPrompter.snse_containers_input
  • icpidentifier.SalesPrompter.linkedin_companies
  • icpidentifier.SalesPrompter.domainFinder_output

Treat these as current warehouse contracts for this repository.
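Because queries run through the local bq CLI rather than a warehouse API client, querying one of these objects from a script amounts to shelling out to bq. A hedged sketch (the helper is illustrative; the flags shown are standard bq query flags):

```python
import subprocess

def bq_query_cmd(sql, project="icpidentifier"):
    """Build an argv list for running a standard-SQL query via the local bq CLI."""
    return [
        "bq", "query",
        "--project_id", project,
        "--use_legacy_sql=false",
        "--format=json",
        sql,
    ]

cmd = bq_query_cmd(
    "SELECT COUNT(*) AS n FROM `icpidentifier.SalesPrompter.leadLists_unique`"
)
# subprocess.run(cmd, capture_output=True, text=True)  # requires bq + local auth
```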

Lead lookup field expectations

For leads:lookup:bq, the CLI expects fields that can map to:

  • company name
  • company domain
  • title
  • first name
  • last name
  • email
  • industry
  • company size
  • country
  • optional region

The command lets you override field names per run, which is useful when your warehouse schema differs from the defaults.
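One way to picture the per-run override is a default field map with user-supplied column names merged over it (the default names below are illustrative assumptions, not the CLI's documented defaults):

```python
# Assumed default warehouse column names; the real defaults may differ.
DEFAULT_FIELDS = {
    "companyName": "company_name",
    "companyDomain": "company_domain",
    "title": "title",
    "firstName": "first_name",
    "lastName": "last_name",
    "email": "email",
    "industry": "industry",
    "companySize": "company_size",
    "country": "country",
    "region": "region",  # optional
}

def effective_fields(overrides=None):
    """Merge per-run column-name overrides over the defaults."""
    fields = dict(DEFAULT_FIELDS)
    fields.update(overrides or {})
    return fields
```

For example, effective_fields({"companyDomain": "website"}) remaps only the domain column and leaves every other mapping untouched.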

Normalization behavior

When raw rows are normalized into Lead objects:

  • companySize buckets are converted into integer employeeCount approximations
  • empty region values are derived from country
  • missing name fields fail normalization
  • the source is set to bigquery-leadpool
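The normalization rules above can be sketched roughly like this (the bucket midpoints and the country-to-region table are assumptions for illustration, not the CLI's actual values):

```python
# Illustrative size-bucket midpoints; the real conversion may differ.
SIZE_BUCKETS = {
    "1-10": 5,
    "11-50": 30,
    "51-200": 125,
    "201-500": 350,
    "501-1000": 750,
}

# Illustrative country-to-region fallback.
COUNTRY_REGION = {"US": "NA", "DE": "EMEA", "JP": "APAC"}

def normalize(row):
    """Turn a raw warehouse row into a Lead-like dict, or raise on bad input."""
    if not row.get("first_name") or not row.get("last_name"):
        # Missing name fields fail normalization outright.
        raise ValueError("missing name fields")
    region = row.get("region") or COUNTRY_REGION.get(row.get("country", ""), "")
    return {
        "firstName": row["first_name"],
        "lastName": row["last_name"],
        "employeeCount": SIZE_BUCKETS.get(row.get("company_size", ""), 0),
        "region": region,
        "source": "bigquery-leadpool",  # fixed source tag
    }
```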

Domain-finder SQL

The domain-finder workflow also relies on BigQuery for several operations:

  • backlog analysis
  • candidate fetch
  • existing-domain audits
  • repair SQL
  • writeback execution

Those commands are intentionally explicit about whether they are:

  • generating SQL only
  • executing SQL
  • writing audit artifacts

That split is part of the safety model.
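That split can be modeled as a command that always returns the SQL it would run and only executes when explicitly told to. A sketch under assumptions (the repair statement and the runner hook are hypothetical stand-ins, not the CLI's real SQL or execution path):

```python
def repair_domains(dry_run=True, runner=None):
    """Generate repair SQL; execute it only when dry_run is False."""
    # Hypothetical repair statement for illustration only.
    sql = (
        "UPDATE `icpidentifier.SalesPrompter.domainFinder_output` "
        "SET domain = LOWER(domain) WHERE domain != LOWER(domain)"
    )
    if dry_run:
        # SQL-only mode: nothing touches the warehouse.
        return {"sql": sql, "executed": False}
    # Execute mode: the runner (e.g. a bq subprocess wrapper) does the write.
    result = runner(sql)
    return {"sql": sql, "executed": True, "result": result}
```

Keeping generation and execution as distinct, explicit modes means a reviewer can inspect the SQL before any writeback happens.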