These case studies exist in two versions. The standard version is written as narrative — what we found, how we found it, what the numbers showed. The methodology version is written for attorneys and forensic reviewers — evidentiary structure, classification framework, chain of custody language. Same key as Services.
Four engagements. Two service lines. Every client identity protected. Every number verified and published. Out of respect for clients whose cases involve ongoing renegotiation or potential proceedings, we protect identities. What we publish are the numbers. The numbers don't need names.
The simulated cases are marked clearly. They are built from real engagement structures and real methodology — illustrating how the pipeline performs on case types we handle. Nothing in them is invented. The patterns are real. The names and industries are illustrative.
The following case records are summarized for forensic review purposes. Each engagement was executed using OakMorel's document intelligence pipeline — computer vision extraction, field-level confidence scoring, human verification on all flagged items, and a final record structured to meet evidentiary standards in renegotiation, mediation, arbitration, and litigation. All findings are traceable to source documents. All source documents are retained on file. Client identities are protected under NDA unless explicitly released.
A regional contractor operating across fourteen job sites commissioned an audit after noticing that costs were running consistently higher than quoted. The feeling was there before the proof was. They knew something was wrong. They couldn't point to it individually — the invoices were too long, the job pace too fast, the line items too numerous to verify one by one. That is exactly the condition the morel requires to survive.
We pulled every quote and every invoice on file across all fourteen jobs — seven active, seven expired — and ran them through the extraction pipeline. Three findings came back immediately. First: quote coverage. Of 1,274 invoiced line items, 954 had no corresponding quoted price. The supplier had been setting prices unilaterally on 74.9% of everything they delivered. Second: direct overcharges. 39 items that did have a quoted price were billed above the agreed amount — a combined $492.24 in documented overcharges on items where the client had every right to hold the supplier accountable. Third — and this is the morel — the substitution pattern.
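The first two checks above reduce to a single reconciliation pass: match every invoiced line item to the quote by SKU, measure what fraction had an agreed price, and flag matched items billed above it. A minimal sketch of that logic, with illustrative names (`LineItem`, `reconcile`) rather than the production pipeline's:

```python
from dataclasses import dataclass

@dataclass
class LineItem:
    sku: str
    description: str
    unit_price: float
    qty: int

def reconcile(quoted: dict[str, float], invoiced: list[LineItem]):
    """Match invoiced items to quoted prices by SKU.

    Returns (coverage_ratio, overcharges), where overcharges lists
    (sku, quoted, billed, total_delta) for items billed above quote.
    Items with no SKU match are the "scope drift" population.
    """
    matched = [li for li in invoiced if li.sku in quoted]
    coverage = len(matched) / len(invoiced) if invoiced else 0.0
    overcharges = []
    for li in matched:
        q = quoted[li.sku]
        if li.unit_price > q:
            overcharges.append((li.sku, q, li.unit_price,
                                (li.unit_price - q) * li.qty))
    return coverage, overcharges
```

In this engagement the same pass, run over 1,274 line items, is what yielded the 74.9% coverage-gap figure and the 39 direct overcharges.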
The substitution pattern was the core finding. Items quoted as Schedule 40 PVC — a standard grade — arrived invoiced as Schedule 80 PVC, a pressure-rated grade requiring a different SKU and carrying a dramatically different price point. The invoice description was almost word-for-word identical to the quote's. A human reviewer scanning the invoice would see the same product. The pipeline saw a different SKU, a different grade, and a price multiplier that in one documented case reached 29.09× the quoted price — $0.87 quoted, $25.31 billed, on a line item ordered ten units at a time.
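Substitution detection of this kind can be approximated by pairing quote and invoice lines whose descriptions are nearly identical but whose SKUs differ and whose price ratio is large. A sketch using standard-library string similarity; the thresholds here are hypothetical, not the pipeline's actual parameters:

```python
import difflib

def find_substitutions(quote_items, invoice_items,
                       desc_threshold=0.9, price_ratio_threshold=3.0):
    """Flag invoice lines whose description nearly matches a quoted
    line but whose SKU differs and whose billed price is a large
    multiple of the quoted price.

    Each item is a (sku, description, unit_price) tuple. Returns
    (quote_sku, invoice_sku, description_similarity, price_ratio).
    """
    flags = []
    for q_sku, q_desc, q_price in quote_items:
        for i_sku, i_desc, i_price in invoice_items:
            if i_sku == q_sku or q_price <= 0:
                continue
            sim = difflib.SequenceMatcher(
                None, q_desc.lower(), i_desc.lower()).ratio()
            ratio = i_price / q_price
            if sim >= desc_threshold and ratio >= price_ratio_threshold:
                flags.append((q_sku, i_sku, round(sim, 3), round(ratio, 2)))
    return flags
```

A description that differs by a single character — "Schedule 40" versus "Schedule 80" — scores near-identical to a human eye and to a similarity ratio alike; it is the SKU and price fields that give the substitution away.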
What made this finding structurally significant was the consistency. The same substitution logic appeared across eight of the fourteen job sites. Different jobs, different delivery dates, different project managers on site — same pattern. That is not a fulfillment error. A fulfillment error appears in one job, gets corrected, and disappears. A pattern that replicates across eight sites and 153 instances is either a system or a policy. The distinction matters enormously in what comes next.
A bar and grill in Southern California was struck by a coordinated wave of 1-star reviews on a major review platform following a social media incident on a high-visibility public event night. Fourteen reviewing accounts posted within 48 hours. The reviews shared template language, sentence structure, and in several cases the same rare vocabulary — distributed across accounts that had no prior relationship to the business and in some cases no geographic proximity to it.
We built a forensic analysis pipeline from the ground up — entity profiling, linguistic fingerprinting, behavioral timeline analysis, engagement anomaly detection, and cross-business network mapping. What we found was not a mob. What we found was infrastructure.
The linguistic fingerprint was the central finding. The word "ambiance" — formal, specific, atypical for the venue type — appeared across multiple attack reviews on the primary business on the attack date, across a collateral business attacked the following day, and in a review posted fifteen months earlier by a dormant account that had been inactive since its creation in 2019. That account posted once, went silent, and the template word it planted appeared in the coordinated attack over a year later.
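One way to surface a shared rare term like "ambiance" is to compare the attack reviews against the venue's baseline review corpus: flag terms that several distinct attacking accounts use but that almost never appear in organic reviews. A simplified sketch; function and parameter names are illustrative, not the engagement's actual tooling:

```python
import re
from collections import Counter

def rare_shared_terms(attack_reviews, baseline_reviews,
                      max_baseline_df=0.01, min_attack_accounts=3):
    """Find terms shared across several attacking accounts that are
    rare in the baseline corpus.

    attack_reviews: list of (account_id, text) pairs.
    baseline_reviews: list of organic review texts for the venue.
    Returns the flagged terms, sorted.
    """
    def terms(text):
        return set(re.findall(r"[a-z']+", text.lower()))

    n = max(len(baseline_reviews), 1)
    baseline_df = Counter()          # document frequency per term
    for text in baseline_reviews:
        baseline_df.update(terms(text))

    accounts_using = {}              # term -> set of attacking accounts
    for account, text in attack_reviews:
        for t in terms(text):
            accounts_using.setdefault(t, set()).add(account)

    return sorted(
        t for t, accts in accounts_using.items()
        if len(accts) >= min_attack_accounts
        and baseline_df[t] / n <= max_baseline_df
    )
```

The same fingerprint, run across businesses and across time, is what connected the attack wave to the dormant 2019 account.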
One reviewer wrote explicitly that he had heard about the incident — not witnessed it. His review was based on secondhand information, which the platform's own guidelines explicitly prohibit. But the specific word that would have triggered automated content moderation was corrupted — a single character changed — leaving the admission readable to a human reviewer while potentially evading keyword-based detection. That is not a clumsy typo. That is an engineered countermeasure.
The platform detected the coordinated attack in real time. Their own unusual activity alert system flagged three of the four affected businesses and suspended new comments. The attack reviews were left standing on all four. The affected businesses' star ratings remained damaged. Several subsequently purchased platform advertising to recover their visibility. The platform collected that revenue.
A healthcare practice group operating across multiple locations had a supply relationship spanning three years with a single distributor. The relationship looked clean on the surface. Invoices arrived on time. Products arrived as described. The pricing felt slightly elevated but the practice manager attributed it to post-pandemic supply chain adjustments and didn't pursue it. The feeling was there. The instrument wasn't.
When we ran the extraction pipeline across three years of invoices and quotes, the scope drift came back at 68% — nearly seven out of every ten invoiced items had no agreed reference price. Standard for the industry, the distributor would later argue. What was not standard was what we found inside the matched items — the 32% of line items where a quoted price did exist.
The substitution pattern in this engagement was different from Case 01. It wasn't grade upgrades on commodity fittings. It was consumable instruments — items quoted as generic and invoiced as name-brand equivalents — carrying the same catalog description, the same SKU prefix, a different suffix, and a price differential that ran consistently between 4× and 11× the quoted amount. Across three years. Across every location. With a consistency that no manual error could produce.
The finding that changed the scope of this engagement was not the substitution pattern itself. It was the multiplier consistency. Random fulfillment errors produce random price differentials. What we documented was a price multiplier that held within a narrow range — 4× to 11× — across three years, across every practice location, across every category of consumable affected. That kind of consistency does not come from human decisions. It comes from a pricing rule.
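The distinction between random error and a pricing rule can be made quantitative: random differentials scatter widely, while a rule produces multipliers in a tight band. A minimal dispersion check on the billed-to-quoted ratios, offered as an illustration rather than the engagement's actual statistics:

```python
import statistics

def multiplier_profile(pairs):
    """Summarize billed/quoted price multipliers.

    pairs: list of (quoted_price, billed_price) tuples.
    Returns (min, max, coefficient_of_variation) of the multipliers.
    A bounded range with a low coefficient of variation is consistent
    with a systematic pricing rule; random fulfillment errors would
    produce a wide, unstable spread.
    """
    ratios = [billed / quoted for quoted, billed in pairs if quoted > 0]
    mean = statistics.mean(ratios)
    cv = statistics.pstdev(ratios) / mean if mean else float("inf")
    return min(ratios), max(ratios), cv
```

Run per location and per year, a profile that holds in the same narrow band everywhere is the statistical signature of the embedded rule described below.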
A pricing rule embedded in the distributor's order management system — mapping generic SKUs to name-brand equivalents at a fixed markup ratio on fulfillment — would produce exactly the pattern we documented. The distributor's own fulfillment staff would have no visibility into it. The billing department would see correct invoices by their own system's standard. The practice would receive the name-brand product and pay the name-brand price, having agreed only to the generic price. The delta would be invisible unless someone held every invoice against every quote simultaneously across the full three-year history.
When the forensic record was delivered to counsel, the scope of the analysis expanded. The distributor operates across multiple states. The pricing configuration, if embedded at the system level, would apply uniformly to any client purchasing the same SKU categories under the same contract structure. The findings were shared with counsel. The matter expanded beyond a single client engagement and now involves multiple jurisdictions. The case is protected. We say no more.
A local business went viral on social media following an ambiguous incident that generated divided public opinion. Within 72 hours, coordinated 1-star reviews appeared simultaneously on two major review platforms. This engagement was different from Case 02 in one critical way: the attacking accounts were not obvious.
The accounts that hit this client had years of platform history. They had profile photos. They had friends. They reviewed restaurants, coffee shops, service businesses — specific places, specific dishes, specific experiences. Their writing was warm, personal, and consistent across years of activity. By every signal available to a platform's automated detection systems, these were real people. The platform did not flag the attack. No unusual activity alert was issued. The reviews stayed up and the star ratings dropped on both platforms simultaneously.
We built the entity profiles anyway. And inside the review history of several of the attacking accounts — buried in years of otherwise authentic-looking activity — we found 5-star reviews of the client's direct competitor. Posted months before the attack. By the same accounts. That is the morel.
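The cross-reference that produced this finding is, structurally, a set intersection: accounts that attacked the client, intersected with accounts that had earlier posted 5-star reviews of the competitor. A schematic sketch with hypothetical field names:

```python
from datetime import date

def competitor_overlap(attack_accounts, competitor_reviews, attack_date):
    """Return attacking accounts that also left a 5-star review for
    the competitor before the attack date.

    attack_accounts: iterable of account ids in the attack wave.
    competitor_reviews: list of (account_id, stars, review_date).
    """
    prior_promoters = {
        acct for acct, stars, posted in competitor_reviews
        if stars == 5 and posted < attack_date
    }
    return sorted(set(attack_accounts) & prior_promoters)
```

A non-empty intersection is exactly the chain-of-interest evidence the record documents: the same accounts, on the same platform, promoting one business and then attacking its rival.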
The sophistication of this network forces a question that Case 02 only raised as a hypothesis. An account with three years of authentic-looking platform history — food photos, named dishes, consistent voice — is not built overnight for a single attack. It is either a real person who was socially activated and directed to attack, or it is a synthetic account trained on real reviewer behavior and maintained over time specifically to survive authenticity detection when deployed.
In either scenario the competitive link changes the nature of what the record documents. A mob attack triggered by a viral event is damaging but legally ambiguous. A coordinated attack by accounts that demonstrably reviewed the victim's competitor in the months before the attack is something different — it establishes a chain of interest that points toward a specific beneficiary. We documented the chain. We do not name the beneficiary. That determination belongs to counsel.
The cross-platform coordination was also significant. When an attack hits one platform, a platform-specific explanation is plausible. When the same accounts hit two platforms simultaneously, following the same trigger event, using consistent template language across both — the explanation narrows considerably. The forensic record was structured to document both platform records simultaneously, with the cross-platform account correlation mapped and sourced.
Every case above started the same way. A feeling. Something that didn't add up. A number that felt wrong, a relationship that felt off, an attack that felt coordinated but couldn't be proved. The feeling was right in every case. Just not always in the way the client expected.
If you have something — tell us. First conversation is free. We'll tell you honestly whether there's a morel worth finding.
Initial case assessment at no charge. We review the document set, assess whether volume and format support a forensic record of evidentiary value, and advise accordingly. Engagements we cannot execute to standard are engagements we decline. Every engagement that proceeds operates under NDA from document one.