Why existing tools miss the most important signal
Sentiment analysis reads words. Focus groups collect declared preference. Media training gives subjective feedback. EchoDepth measures the involuntary physiological signal that none of them can reach.
Sentiment analysis tools
Analyse words. Flag 'negative', 'positive' or 'neutral' language.
Cannot read tone, delivery, facial expression or vocal hesitation. A speaker can deliver 'we are confident in our outlook' with every stress marker of deception — sentiment tools give it a positive score.
Reads the person, not the text. 44 FACS Action Units from video. Vocal pattern analysis from audio. Structural hesitation from text transcripts. A complete signal, not a word count.
Text analysis tools are useful for volume monitoring. They cannot tell you whether an individual speaker is believed.
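The failure mode described above can be made concrete with a toy sketch. Everything here is illustrative: the lexicon, the thresholds and the record fields are assumptions for demonstration, not EchoDepth's actual signal model.

```python
from dataclasses import dataclass

# Illustrative lexicon: a word-level sentiment tool only sees these.
POSITIVE_WORDS = {"confident", "strong", "growth"}
NEGATIVE_WORDS = {"risk", "loss", "decline"}

def word_sentiment(text: str) -> int:
    """Naive lexicon score: +1 per positive word, -1 per negative word."""
    words = text.lower().split()
    return sum(w in POSITIVE_WORDS for w in words) - sum(w in NEGATIVE_WORDS for w in words)

@dataclass
class MultimodalRecord:
    text: str
    stress_action_units: int   # stress-linked facial Action Units observed in video
    vocal_hesitations: int     # pauses and fillers detected in the audio

    def flags_stress(self) -> bool:
        # Invented thresholds: a multimodal read can contradict the words themselves.
        return self.stress_action_units >= 3 or self.vocal_hesitations >= 2

statement = MultimodalRecord("we are confident in our outlook",
                             stress_action_units=4, vocal_hesitations=3)

print(word_sentiment(statement.text))  # positive score from the words alone
print(statement.flags_stress())        # the physiology tells a different story
```

The point of the sketch: both readings are computed from the same moment of speech, but only the multimodal record carries the signal the words conceal.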
Focus groups and surveys
Ask people what they think or feel. Record self-reported responses.
Participants perform for the group. Social desirability bias distorts every response. What people say they feel and what they actually feel diverge — consistently and measurably.
Records involuntary physiological signals — the 44 facial muscle movements that are extremely difficult to control consciously. You get the real response, not the performed one.
Focus groups surface stated preferences. EchoDepth surfaces actual emotional engagement. The two often disagree — and the disagreement is the most valuable insight.
Media training and coaching
Coaches speakers on what to say, how to stand, and general presentation technique.
Qualitative feedback is subjective. There is no objective baseline and no measurable outcome. The coach's opinion is not auditable. Training outcomes cannot be proven.
Generates a pre-training and post-training Trust Score, Credibility Signal and Confidence Score. Improvement is quantified. The coaching outcome is auditable and reportable.
Media training and EchoDepth are complementary. Training provides the method; EchoDepth provides the measurement. Together they produce a provable outcome.
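The "provable outcome" claim is arithmetically simple: a per-metric delta between the pre-training and post-training reports. A minimal sketch; the field names and values are illustrative, not the product's actual schema.

```python
def improvement(pre: dict, post: dict) -> dict:
    """Per-metric delta between a pre-training and a post-training report."""
    return {metric: post[metric] - pre[metric] for metric in pre}

# Hypothetical before/after reports for one speaker.
pre_report  = {"trust_score": 67, "credibility_signal": 58, "confidence_score": 61}
post_report = {"trust_score": 81, "credibility_signal": 70, "confidence_score": 74}

print(improvement(pre_report, post_report))  # each metric's gain, ready for a coaching report
```

Trivial as the arithmetic is, it is exactly what qualitative coaching feedback cannot produce: a number that existed before the intervention and a number that exists after it.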
Polygraph (lie detection)
Measures skin conductance, blood pressure and respiration. Attempts to detect deception.
The National Academy of Sciences found no scientific consensus on polygraph accuracy. False negative rates up to 47% in controlled studies. Inadmissible in UK courts. Highly invasive.
Measures facial Action Units and vocal patterns using the FACS standard — one of the most extensively validated frameworks for measuring emotional expression. Non-invasive. No contact. Scientifically defensible.
EchoDepth does not claim to detect lies. It measures the involuntary physiological markers that accompany stress, cognitive load and emotional state change — and produces a timestamped, auditable output.
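One concrete way to deliver "timestamped, auditable output" is to sign each result so that any later alteration is detectable. A generic sketch using Python's standard hmac module; the key handling, field names and record format are assumptions, not EchoDepth's actual mechanism.

```python
import hmac, hashlib, json

SECRET_KEY = b"demo-key-not-for-production"  # assumed; a real system would use managed keys

def sign_result(result: dict) -> dict:
    """Attach an HMAC over the canonical JSON so tampering is detectable."""
    payload = json.dumps(result, sort_keys=True).encode()
    signature = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"result": result, "signature": signature}

def verify(record: dict) -> bool:
    """Recompute the HMAC and compare in constant time."""
    payload = json.dumps(record["result"], sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

record = sign_result({"trust_score": 67, "timestamp": "2025-01-15T10:30:00Z"})
print(verify(record))                         # True: the record is intact
record["result"]["trust_score"] = 95
print(verify(record))                         # False: the edit is detectable
```

This is the property a polygraph chart, annotated by hand, does not have: the output carries its own evidence of integrity.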
Human observers / analysts
Trained observers watch recordings and make qualitative judgements about behaviour.
Observer judgement is inconsistent, non-reproducible and uncalibrated for cultural variation. No two observers produce the same result. No audit trail. Cannot scale.
44 Action Units per frame. Same measurement every time. Calibrated across 14 cultural cohorts. Fully auditable output. Scales to any volume without degradation in consistency.
EchoDepth augments human observers — it provides the baseline data that makes human judgement more consistent and defensible.
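"Same measurement every time" is, at bottom, a claim that scoring is a pure function of the input — which is what makes it reproducible in a way observer judgement is not. A toy sketch; the weights are invented, and only the AU numbering follows the FACS convention.

```python
import hashlib, json

def score_frame(action_units: dict) -> float:
    """A pure function of the input: identical frames always score identically."""
    # Invented weights; keys follow FACS numbering (AU1 inner brow raiser,
    # AU4 brow lowerer, AU12 lip corner puller).
    weights = {1: 0.2, 4: -0.3, 12: 0.5}
    return round(sum(weights.get(au, 0.0) * intensity
                     for au, intensity in action_units.items()), 4)

frame = {1: 0.8, 4: 0.1, 12: 0.6}
print(score_frame(frame) == score_frame(dict(frame)))  # True: reproducible by construction

# An audit trail can then pair each score with a content hash of its input frame,
# so anyone re-running the measurement can confirm they scored the same data.
print(hashlib.sha256(json.dumps(frame, sort_keys=True).encode()).hexdigest()[:12])
```

Two trained observers watching the same tape routinely disagree; two runs of a deterministic function cannot.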
What makes EchoDepth different
The argument competitors cannot make
Most communication analysis tools compete on capability — accuracy, speed, integrations. EchoDepth competes on something different: the ability to produce evidence that holds up in a regulated context.
Conventional analysis tools produce useful outputs. They cannot produce outputs that are auditable, reproducible, consent-documented and methodology-attributed in the way FCA, legal and investor governance contexts require.
Large language models can detect and describe communication signals. They cannot produce a named-methodology output with cultural calibration, benchmark comparison and a signed audit trail, and they cannot tell you whether a score of 67 is good or bad for your sector.
Validated scoring methodology. 14 cultural cohorts. Benchmark comparison data. ICO registered, consent-documented, DPA standard. Timestamped, reproducible outputs structured for audit. The output carries methodological authority — not just analysis.
Can't a large language model just do this? It's a fair question. The answer is that capability is not the differentiator — evidence is. An LLM saying a speaker "seemed uncertain" is not the same as a signed Trust Score of 67 produced under a documented methodology that can be shown to have improved to 81 after targeted coaching. The difference is what you can defend.