The CLI talks to BigQuery through the local `bq` CLI. It does not proxy warehouse queries through the Salesprompter app backend.
Prerequisites
- `bq` installed locally
- local Google Cloud auth configured
- access to the datasets referenced by the CLI
Project selection
The CLI resolves the BigQuery project in this order:
1. `BQ_PROJECT_ID`
2. `GOOGLE_CLOUD_PROJECT`
3. `GCLOUD_PROJECT`
4. `icpidentifier` (fallback)

If you set none of these, the CLI defaults to `icpidentifier`.
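The resolution order above can be sketched as a simple environment-variable fallback chain. The function name is illustrative, not the CLI's actual identifier; empty-string values are treated as unset, which is an assumption:

```typescript
// Hypothetical sketch of the project-resolution order described above.
// Empty-string env values are treated as unset (assumption).
function resolveProjectId(env: Record<string, string | undefined>): string {
  return (
    env.BQ_PROJECT_ID ||
    env.GOOGLE_CLOUD_PROJECT ||
    env.GCLOUD_PROJECT ||
    "icpidentifier" // hardcoded fallback
  );
}
```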
Tables and views currently referenced
The current implementation references these warehouse objects:
- `icpidentifier.SalesGPT.leadPool_new`
- `icpidentifier.SalesPrompter.leadLists_raw`
- `icpidentifier.SalesPrompter.leadLists_unique`
- `icpidentifier.SalesPrompter.linkedinSearchExport_people_unique`
- `icpidentifier.SalesPrompter.salesNavigatorSearchExport_companies_unique`
- `icpidentifier.SalesPrompter.salesNavigatorSearchExport_companies_unique_enriched`
- `icpidentifier.SalesPrompter.snse_containers_input`
- `icpidentifier.SalesPrompter.linkedin_companies`
- `icpidentifier.SalesPrompter.domainFinder_output`
Treat these as current warehouse contracts for this repository.
Lead lookup field expectations
For `leads:lookup:bq`, the CLI expects fields that can map to:
- company name
- company domain
- title
- first name
- last name
- industry
- company size
- country
- optional region
The command lets you override field names per run, which is useful when your warehouse schema differs from the defaults.
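The override mechanism can be sketched as a default field map merged with per-run overrides. The interface, default column names, and function names below are assumptions for illustration, not the CLI's real defaults:

```typescript
// Hedged sketch of per-run field-name overrides. The default column names
// are illustrative; only the override behavior comes from the docs.
interface LeadFieldMap {
  companyName: string;
  companyDomain: string;
  title: string;
  firstName: string;
  lastName: string;
  industry: string;
  companySize: string;
  country: string;
  region?: string; // optional in the lookup contract
}

const DEFAULT_FIELDS: LeadFieldMap = {
  companyName: "company_name",
  companyDomain: "company_domain",
  title: "title",
  firstName: "first_name",
  lastName: "last_name",
  industry: "industry",
  companySize: "company_size",
  country: "country",
  region: "region",
};

// Per-run overrides win over defaults, so a warehouse with a
// differently named column can still feed the same lookup.
function withOverrides(overrides: Partial<LeadFieldMap>): LeadFieldMap {
  return { ...DEFAULT_FIELDS, ...overrides };
}
```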
Normalization behavior
When raw rows are normalized into Lead objects:
- `companySize` buckets are converted into integer `employeeCount` approximations
- empty region values are derived from the country
- rows with missing name fields fail normalization
- the source is set to `bigquery-leadpool`
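The four rules above can be sketched as follows. The bucket formats, midpoint choice, row field names, and function names are assumptions; only the behaviors (bucket-to-integer conversion, region fallback, name requirement, fixed source) come from this document:

```typescript
// Hedged sketch of normalization; shapes and names are illustrative.
interface Lead {
  firstName: string;
  lastName: string;
  employeeCount: number;
  country: string;
  region: string;
  source: "bigquery-leadpool";
}

// Convert a size bucket like "51-200" or "10001+" into an integer
// approximation (midpoint of a closed range, lower bound of an open one).
function approximateEmployeeCount(bucket: string): number {
  const range = bucket.match(/^(\d+)\s*-\s*(\d+)$/);
  if (range) return Math.round((Number(range[1]) + Number(range[2])) / 2);
  const open = bucket.match(/^(\d+)\+$/);
  if (open) return Number(open[1]);
  return Number(bucket) || 0;
}

function normalizeRow(row: Record<string, string>): Lead {
  // Missing name fields fail normalization.
  if (!row.first_name || !row.last_name) {
    throw new Error("missing name fields: row fails normalization");
  }
  return {
    firstName: row.first_name,
    lastName: row.last_name,
    employeeCount: approximateEmployeeCount(row.company_size ?? ""),
    country: row.country ?? "",
    // Empty region values are derived from the country.
    region: row.region || row.country || "",
    source: "bigquery-leadpool",
  };
}
```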
Domain-finder SQL
The domain-finder workflow also relies heavily on BigQuery:
- backlog analysis
- candidate fetch
- existing-domain audits
- repair SQL
- writeback execution
Those commands are intentionally explicit about whether they are:
- generating SQL only
- executing SQL
- writing audit artifacts
That split is part of the safety model.