Edit, code, weight, tabulate and deliver. A DP unit that has processed Caribbean and LATAM research datasets since 1970 and treats clean data as the product, not the afterthought.
Clean data is the product. A questionnaire and a field team are inputs. The dataset you hand a client team to analyse is the actual deliverable, and it is where sloppy firms lose trust fastest. A mislabelled variable, a miscoded open-end, a tab that disagrees with the topline. That is how a project dies.
CMR runs a dedicated DP unit in Trinidad that has been processing market research datasets for over fifty years. The team handles edit checking, open-end coding, weighting, tabulation and delivery for every CMR project and for client projects where the fieldwork happened somewhere else.
The unit runs SPSS, Q, Excel and proprietary DP scripts. Every output is double-checked against the topline before it ships. Tab specs get agreed with the client before tabulation starts so the final book matches the analysis plan.

Brand health, customer satisfaction, NPS. Wave-to-wave consistency is survival. Same codeframes, same weighting, same tab spec every time. Sloppy DP kills tracker trust fast.
Cluster analysis, factor-based segments, custom group variables. These need the DP team to build and apply segment flags across hundreds of crosstabs.
Qualitative-quant hybrids, driver studies, customer verbatims. A 500-response open-end coded on the fly loses meaning. A proper codeframe makes the data analysable.
One dataset from three territories in two languages. Harmonised variable labels, merged value labels, a tab book that reads the same whether the client sits in Kingston or Santo Domingo.
Raw dataset checked against the questionnaire. Record count matches field. Quota cells match the plan. Any inconsistency gets flagged before the clock starts.
Impossible combinations flagged. Routing violations corrected or removed. Speeder and straight-line checks on online. Open-ends cleaned of gibberish and bot responses.
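The logic checks above can be sketched in code. This is a minimal Python illustration, not CMR's proprietary DP scripts: the record fields (`id`, `duration`, the grid keys) and the one-third-of-median speed cutoff are assumptions chosen for the example, not house rules.

```python
from statistics import median

def flag_speeders_and_straightliners(records, grid_keys, speed_ratio=0.33):
    """Flag interviews that finished implausibly fast or gave the same
    answer down an entire grid of rating questions.

    records: list of dicts, each with an 'id', a 'duration' in seconds
    and the grid answer keys. Field names are illustrative only.
    """
    med = median(r["duration"] for r in records)
    flagged = []
    for r in records:
        speeder = r["duration"] < speed_ratio * med   # well under median length
        answers = [r[k] for k in grid_keys]
        straightliner = len(set(answers)) == 1        # identical answers across grid
        if speeder or straightliner:
            flagged.append((r["id"], speeder, straightliner))
    return flagged
```

Flagged cases would then be reviewed, corrected or removed rather than dropped automatically.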
Codeframes built from the first 20 percent of open-ends, reviewed by the research director, applied to the rest. Variable labels and value labels written once and reused for every wave of a tracker.
Weights built against agreed targets, applied, documented. Tabulation run to the agreed spec with significance testing on the cells the brief needs.
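Weighting to agreed targets is commonly done by raking (iterative proportional fitting). A minimal sketch under assumed inputs: one simple proportion target per category, no trimming or convergence diagnostics, and illustrative key names; real weighting schemes vary by sample plan.

```python
def rake_weights(sample, targets, iterations=50):
    """Iterative proportional fitting: adjust respondent weights so the
    weighted marginals match agreed population targets.

    sample:  list of dicts keyed by demographic dimension, e.g. {'sex': 'f'}.
    targets: {dimension: {category: target_proportion}}.
    All names are illustrative, not a fixed spec.
    """
    weights = [1.0] * len(sample)
    for _ in range(iterations):
        for dim, cats in targets.items():
            # current weighted total of each category on this dimension
            totals = {c: 0.0 for c in cats}
            for w, r in zip(weights, sample):
                totals[r[dim]] += w
            total_w = sum(weights)
            # scale each respondent so the category share hits its target
            for i, r in enumerate(sample):
                share = totals[r[dim]] / total_w
                weights[i] *= cats[r[dim]] / share
    return weights
```

With a single dimension this converges in one pass; with several, the loop cycles until the marginals settle.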
Every tab reconciled against the topline. Labelled SPSS plus Excel tab book plus any client-preferred format. Source files archived so wave 2 can be tabulated the exact same way.
Four core outputs. The files your team needs to run their own cuts and the audit trail that proves the data stands up.
SPSS SAV is standard. Excel tab book is standard. We also deliver CSV, tab-delimited, SAS and Stata on request. The source script is archived so any format can be regenerated months later.
Yes. We regularly clean, code and tabulate datasets collected by clients or by other fieldwork firms. Send the raw file and the questionnaire. We run an intake assessment and quote the processing work separately from fieldwork.
Codeframe built off the first fifth of responses, reviewed with the research director on the brief, then applied to the whole file. For high-volume studies we run two coders in parallel with a reliability check. For multi-wave trackers the codeframe is frozen after wave one so themes can be tracked over time.
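The parallel-coder reliability check can be any chance-corrected agreement statistic; Cohen's kappa is a common choice. A sketch with hypothetical code lists; the acceptance threshold is a study-level judgement and is not fixed here.

```python
def cohens_kappa(codes_a, codes_b):
    """Chance-corrected agreement between two coders who coded the same
    open-ends independently. 1.0 is perfect agreement; 0.0 is what
    chance alone would produce."""
    assert len(codes_a) == len(codes_b)
    n = len(codes_a)
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    # expected agreement if each coder assigned codes at their own base rates
    cats = set(codes_a) | set(codes_b)
    expected = sum(
        (codes_a.count(c) / n) * (codes_b.count(c) / n) for c in cats
    )
    return (observed - expected) / (1 - expected)
```

A low kappa would send the codeframe back for revision before it is applied to the whole file.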
Weighting is included in the base DP price for any study where the sample plan requires it. Bespoke weighting on unusual targets or with client-supplied population data is quoted line-item.
Variable labels harmonised across territories before merge. Value labels standardised. Weights applied at territory level then re-based for the pooled analysis. The final pooled dataset carries territory as a banner variable so any tab can be cut by market.
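The merge-and-re-base step might look like the sketch below. The function name, the `weight` key and the `genero` to `gender` rename are illustrative assumptions, not the actual pipeline; the point is that each territory's within-market weights are rescaled to its population share before pooling, and `territory` travels with every record as a banner variable.

```python
def pool_territories(files, rename):
    """Merge per-territory datasets into one pooled file.

    files:  {territory: (records, population)}, where records is a list
            of dicts already weighted within-territory (key 'weight').
    rename: harmonisation map for variable names, e.g. {'genero': 'gender'}.
    """
    total_pop = sum(pop for _, pop in files.values())
    pooled_n = sum(len(recs) for recs, _ in files.values())
    pooled = []
    for terr, (records, pop) in files.items():
        terr_sum = sum(r["weight"] for r in records)
        # re-base: this territory's weights sum to its population share
        factor = (pop / total_pop) * pooled_n / terr_sum
        for r in records:
            row = {rename.get(k, k): v for k, v in r.items()}
            row["weight"] = r["weight"] * factor
            row["territory"] = terr   # banner variable for by-market cuts
            pooled.append(row)
    return pooled
```

Any tab run on the pooled file can then be cut by `territory` without touching the weights again.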