Conjoint, segmentation, driver analysis and predictive modeling. Advanced quantitative methods applied to Caribbean datasets by a team that understands the region as well as the math.
Most research questions end at the topline. Some don't. When the brief asks which attribute matters most for choice, which customers group together, what really drives NPS, or how the market will respond to a new price, a mean score is not the answer. A properly specified model is.
CMR runs the full modern toolkit. Choice-based conjoint for feature and price tradeoffs. Latent-class and k-means segmentation. Driver analysis with Shapley or relative weights. Logistic and linear regression for prediction. Importance ranking for attribute prioritisation. All run on clean datasets our own DP team has prepared.
The deliverable is not a printout of R output. It is a set of recommendations a commercial team can act on, backed by a technical appendix that stands up to audit by a data science team on the client side.

Conjoint tells you which combination of features and price wins share. Van Westendorp tells you the acceptable price band. Neither is a mean-score question.
Who are my customers, really? What do the groups want? How big is each, and how valuable? A segmentation that survives contact with the commercial team needs proper modeling.
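As an illustrative sketch of one method mentioned on this page, here is k-means on standardized survey variables, with segment sizes checked afterwards. The data and variable meanings are invented:

```python
# Illustrative only: two fake respondent groups recovered by k-means.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# Hypothetical respondents on two attitude scales
X = np.vstack([
    rng.normal([1.0, 4.0], 0.5, size=(150, 2)),   # e.g. value seekers
    rng.normal([4.0, 1.0], 0.5, size=(150, 2)),   # e.g. heavy users
])

# Standardize before clustering so no scale dominates the distance
km = KMeans(n_clusters=2, n_init=10, random_state=42)
labels = km.fit_predict(StandardScaler().fit_transform(X))

sizes = np.bincount(labels)
print(sizes)  # two segments of roughly 150 respondents each
```

In practice the segment count is chosen by comparing several solutions for stability and commercial meaning, not fixed in advance as it is here.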
What actually moves NPS, satisfaction or preference? Stated importance lies. Derived importance, built from a regression or Shapley decomposition, tells the true story.
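A minimal sketch of derived importance via a Shapley-style decomposition of R² across drivers. The drivers and data are synthetic, and this is the brute-force version, feasible only for a handful of drivers:

```python
# Hedged sketch: average each driver's R^2 contribution over every
# ordering in which drivers can enter the regression.
from itertools import permutations
import numpy as np

def r2(X, y, cols):
    """R^2 of an OLS fit on the given driver columns (with intercept)."""
    if not cols:
        return 0.0
    Xs = np.column_stack([np.ones(len(y))] + [X[:, c] for c in cols])
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    resid = y - Xs @ beta
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

def shapley_importance(X, y):
    k = X.shape[1]
    contrib = np.zeros(k)
    perms = list(permutations(range(k)))
    for perm in perms:
        used = []
        for c in perm:
            before = r2(X, y, used)
            used.append(c)
            contrib[c] += r2(X, y, used) - before  # marginal R^2 gain
    return contrib / len(perms)

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = 2.0 * X[:, 0] + 1.0 * X[:, 1] + rng.normal(size=500)  # driver 2 is pure noise
imp = shapley_importance(X, y)
print(np.round(imp, 3))  # importances sum to the full-model R^2
```

The importances telescope to the full-model R² by construction, which is what makes the decomposition auditable.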
Which customers will churn? Which prospects will convert? Logistic regression or gradient boosting on your dataset, with a clean holdout, gives you a model the retention team can use tomorrow.
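The shape of that workflow, sketched on synthetic data rather than a client dataset: fit a logistic regression on a training split, then score it on a holdout the model never saw.

```python
# Illustration only: churn drivers and coefficients are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 2000
tenure = rng.normal(size=n)       # hypothetical predictors
complaints = rng.normal(size=n)
logit = -1.0 - 1.5 * tenure + 1.2 * complaints
churn = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = np.column_stack([tenure, complaints])
X_tr, X_te, y_tr, y_te = train_test_split(X, churn, test_size=0.3, random_state=7)

model = LogisticRegression().fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(round(auc, 3))  # holdout AUC, well above the 0.5 chance line here
```

The holdout AUC, not the training fit, is the number that tells the retention team whether the scores are worth acting on.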
Workshop the decision the analysis has to support. Lock the deliverable format before method selection. If the brief changes midway, the analysis design has to change with it.
Sample size and quota plan driven by the method. Attribute ranking needs enough respondents per item. Segmentation needs enough variance. Conjoint needs a balanced experimental design. All of this gets locked before field starts.
Modeling on cleaned dataset. Multiple specifications compared for stability. Holdout validation on predictive work. Technical appendix documenting every decision.
The output is not R logs. The output is a set of decisions the business can act on. Every chart has a recommendation. Every recommendation has the math backing it.
Conjoint ships with an Excel market simulator. Segmentation ships with a typing tool. Drivers ship with a scorecard. The analysis lives on after the presentation ends.
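A toy version of the market-simulator idea: part-worth utilities from a conjoint feed a logit share-of-preference calculation. The utilities below are invented; the Excel deliverable wraps the same arithmetic behind scenario inputs.

```python
# Hedged sketch of a logit share-of-preference simulator.
import math

# Hypothetical part-worths per attribute level
utils = {
    ("brand", "A"): 0.6, ("brand", "B"): 0.0,
    ("price", "low"): 0.8, ("price", "high"): -0.4,
}

def total_utility(product):
    return sum(utils[(attr, level)] for attr, level in product.items())

def shares(products):
    """Logit rule: each product's share of preference in the scenario."""
    expu = [math.exp(total_utility(p)) for p in products]
    return [e / sum(expu) for e in expu]

scenario = [
    {"brand": "A", "price": "high"},
    {"brand": "B", "price": "low"},
]
s = shares(scenario)
print([round(x, 3) for x in s])  # → [0.354, 0.646]
```

Because the simulator is just this arithmetic over estimated utilities, the commercial team can rerun pricing scenarios without touching the underlying model.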
Four outputs. A finding. A business narrative. A tool the team can keep using. And the technical audit trail for your data science team.
When you need to rank twenty attributes by importance and rating grids produce a wall of "very important" responses with no differentiation. Forced-choice ranking makes respondents trade off so every attribute gets a real score. Standard for feature prioritisation and packaging claim tests.
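The simplest read of a forced-choice exercise is a counting analysis: score each attribute by how often it is picked best minus how often it is picked worst, divided by how often it was shown. The tasks below are invented, and production work would fit a choice model rather than raw counts:

```python
# Hedged sketch of best-worst counting scores.
from collections import Counter

tasks = [
    # (attributes shown, picked best, picked worst) — hypothetical
    (["price", "speed", "support"], "price", "support"),
    (["price", "speed", "coverage"], "speed", "coverage"),
    (["speed", "support", "coverage"], "speed", "support"),
]

best, worst, shown = Counter(), Counter(), Counter()
for items, b, w in tasks:
    shown.update(items)
    best[b] += 1
    worst[w] += 1

scores = {i: (best[i] - worst[i]) / shown[i] for i in shown}
print(sorted(scores.items(), key=lambda kv: -kv[1]))
```

The trade-off structure is what produces differentiation: every attribute ends up with a real score on a -1 to +1 scale instead of a wall of "very important".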
Yes. Sample sizes start at 300 per segment for stable estimates. Mobile-first conjoint design is critical because most completes come from phones. Market simulator ships in Excel so the commercial team can run scenarios without us.
Minimum 500 for a defensible segmentation, 1,000+ for multi-country work. The issue is not just the total. It is ensuring the smallest segment reaches 75 to 100 respondents so the profile holds up to scrutiny.
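A back-of-envelope check on why the smallest segment matters: the 95% margin of error on a 50% proportion, using the standard normal approximation. This is illustration, not CMR's sizing rule:

```python
# At n = 75 a segment-level percentage carries roughly +/- 11 points
# of sampling error; at n = 100, roughly +/- 10.
import math

def margin_of_error(n, p=0.5, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

for n in (75, 100, 500):
    print(n, round(margin_of_error(n), 3))
```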
We bring the research design on top of the math. A data science team can fit a model. A research team connects the model to the business question, the sample design and the interpretation. Best results happen when our analytics work ships to your data team with full code and documentation so they can extend it in production.
Both. Every advanced analytics engagement ships with the cleaned dataset, model scores or segment flags appended as variables and a technical appendix. Your team can re-run and extend the analysis any time.