Version 0.1 — draft for review. This framework is itself subject to audit.

Purpose

The Authenticity Audit exists because the word "authenticity" has been so heavily commodified in tourism marketing that it no longer means anything operationally. Every provider claims to be authentic. Every platform claims to support local communities. Every destination claims to value its character.

The audit replaces self-declaration with structural assessment. It asks not what the operation says about itself, but how it is built, where the money goes, who has a voice, and what happens when things go wrong.

It is designed to apply to four kinds of entity:

If the framework cannot be applied honestly to the author’s own business, the framework is not honest. That is the test.


What this audit is, and is not

This audit is:

This audit is not:

The audit’s bias is explicit: pro-local-control, pro-direct-booking, pro-resident-voice, pro-cultural-sovereignty, pro-transparency, pro-long-term place stewardship. Operations that align with those values will score well. Operations whose structure works against those values will score poorly. The framework does not pretend neutrality on this.


Structure

The audit has two parts.

Part One: Hard Criteria. Seven binary, evidence-based questions. An operation must pass all seven to be eligible for any positive verdict. These are gatekeeping criteria — failing any one of them produces a Fail regardless of how the operation scores on the dimensions below.

Part Two: Scored Dimensions. Eight dimensions, each with four or five criteria, each criterion scored on a four-point scale (Strong / Adequate / Weak / Failing). Each dimension is then graded on the basis of its component criteria. The dimension grades, taken together, produce the final verdict.

This two-tier structure exists because some things are categorical (you either disclose your ownership or you don’t) and some things are matters of degree (how locally retained is your money flow). Collapsing both into a single numeric score invites gaming. Keeping them separate forces a defensible judgement.
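As an illustration only, the gate-then-grade logic described above can be sketched in code. The type names here, and the mapping from dimension grades to a final verdict, are assumptions made for the sketch; the framework itself does not publish a mechanical mapping.

```python
from dataclasses import dataclass

@dataclass
class Audit:
    hard_criteria: dict[str, bool]    # HC1..HC7 -> passed?
    dimension_grades: dict[str, str]  # dimension -> Strong/Adequate/Weak/Failing

def verdict(audit: Audit) -> str:
    # Part One is categorical: failing any hard criterion produces
    # a Fail regardless of the dimension grades below.
    if not all(audit.hard_criteria.values()):
        return "Fail"
    # Part Two is a matter of degree: the dimension grades, taken
    # together, produce the verdict. This particular mapping is a
    # hypothetical placeholder, not the framework's own rule.
    grades = list(audit.dimension_grades.values())
    if any(g == "Failing" for g in grades):
        return "Conditional Fail"
    if all(g == "Strong" for g in grades):
        return "Pass"
    return "Pass with Notes"
```

Keeping the gate outside the scoring function is what makes the structure resistant to gaming: no accumulation of Strong grades can buy back, say, a concealed ownership structure.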


Part One: Hard Criteria

An operation must pass all seven to be eligible for any verdict above Fail.

HC1. Ownership is not concealed

Test: A reasonable researcher can determine, within 30 minutes of public research, who ultimately owns and controls the operation.

Evidence: Public business registry filings, ownership disclosure on the operation’s own surfaces, named directors or shareholders.

Failure mode: Shell companies, opaque holding structures, or marketing surfaces that imply local ownership while the operation is owned externally.

HC2. No exclusivity clauses preventing direct bookings to providers

Applies to: Platforms, intermediaries, package operators, block-bookers.

Test: The platform does not require providers to route all bookings through the platform, does not penalise providers for accepting direct bookings, and does not implement rate parity clauses or equivalent mechanisms that constrain a provider's pricing on the provider's own channels.

Evidence: The written agreement governing the platform-provider relationship.

Failure mode: Rate parity clauses, “best price guarantees” enforced against the provider, exclusivity periods, penalty mechanisms for direct competition.

HC3. Commercial model is publicly disclosed

Test: The operation publishes, in plain language, how it makes money — including commission percentages, fee structures, and any commercial relationships that shape what travellers see.

Evidence: A public page describing the commercial model, accessible without signup, written in plain language at a level a 16-year-old can understand.

Failure mode: Commercial model buried in terms and conditions, opaque to travellers, or presented in language designed to obscure rather than disclose.

HC4. Money flow is publicly disclosed

Test: The operation discloses, with reasonable specificity, where money lands first, what proportion is retained locally, what proportion leaves the local economy.

Evidence: A public statement of money flow, including the operation’s own non-commission outflows (payment processing, hosting, software, advertising).

Failure mode: Disclosing only the OTA-versus-direct framing while omitting the operation’s own non-local cost base. This is the criterion that prevents an operation from criticising others for what it does itself.

HC5. Cultural appropriation is not present

Test: The operation does not use Sámi imagery, names, dress, ritual, or symbols in marketing or product without explicit, documented permission from a Sámi-led body or individual with the authority to grant it.

Evidence: Where Sámi cultural elements are present in the operation, documented permission and attribution are publicly available.

Failure mode: Reindeer-and-shaman aesthetic marketing by non-Sámi operators, “Lappish” language that flattens distinct cultures, generic Arctic-mystic positioning that monetises cultural ambience without cultural relationship.

HC6. A written agreement governs commercial relationships

Test: Travellers and providers entering a commercial relationship with the operation do so under written terms that are accessible before they commit.

Evidence: A traveller-facing terms-of-service document. A provider-facing participation agreement. Both publicly available.

Failure mode: Verbal arrangements, “just trust us” partnerships, or agreements existing only after the relationship has begun.

HC7. A functioning right of reply exists

Test: Any individual or entity affected by the operation’s published claims, scores, or content can submit a correction and receive a substantive response within a stated timeframe.

Evidence: A public correction policy with a stated response time, a public log of corrections made.

Failure mode: No correction route, corrections made silently, or a correction policy that exists on paper but has never been used.


Part Two: Scored Dimensions

Each criterion is scored on the four-point scale defined above: Strong, Adequate, Weak, or Failing.

A dimension is graded on the basis of its component criteria, with the lowest-scoring criterion weighted more heavily than the highest. A dimension with one Failing criterion cannot be graded Strong, regardless of its other scores.
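One possible mechanical reading of this grading rule, offered as a sketch rather than as the framework's own algorithm (the cap at one step above the weakest score is an assumption):

```python
SCALE = ["Failing", "Weak", "Adequate", "Strong"]

def grade_dimension(criterion_scores: list[str]) -> str:
    """Grade a dimension from its component criterion scores."""
    indices = sorted(SCALE.index(s) for s in criterion_scores)
    lowest, median = indices[0], indices[len(indices) // 2]
    # The lowest-scoring criterion weighs more than the highest:
    # here, the grade can sit at most one step above the weakest score.
    graded = min(median, lowest + 1)
    # Explicit rule from the framework: one Failing criterion means
    # the dimension cannot be graded Strong, whatever the other scores.
    if lowest == SCALE.index("Failing"):
        graded = min(graded, SCALE.index("Adequate"))
    return SCALE[graded]
```

For example, criteria scored Strong, Strong, Failing would grade Weak under this reading: the single Failing criterion drags the dimension down despite two Strong scores.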


Dimension 1: Ownership & Control

Where do decisions get made, and by whom?

1.1 Beneficial ownership location. Where do the people who ultimately profit from this operation live, pay tax, and raise their families? Higher score: locally resident, tax-domiciled in the municipality. Lower score: non-resident, foreign-domiciled, ultimate parent corporation outside Finland.

1.2 Operational decision-making location. Where are decisions about pricing, hiring, partnerships, and strategy actually made? Higher score: by people physically present in the destination. Lower score: by remote management, head office, or algorithmic system.

1.3 Time horizon and tenure. How long has this operation existed in this place, and how long does its model assume it will continue? Higher score: multi-decade local presence with succession planning. Lower score: short-tenure operations, opportunistic entry, or models predicated on extraction-and-exit.

1.4 Sophisticated failure criterion — local-fronted, externally-controlled. Operations that present a local face while being financially or operationally controlled externally score Failing on this dimension regardless of other criteria. This includes: locally registered subsidiaries of foreign chains, “owner-operator” branding for franchise locations, and storefront localism masking corporate ownership.


Dimension 2: Money Flow & Local Retention

Where does the money actually go?

2.1 First-landing of traveller payment. Where does traveller money first arrive — provider’s account, intermediary’s account, escrow? Higher score: directly to the provider’s local account. Lower score: held by intermediary, especially if held outside Finland.

2.2 Proportion retained locally on a typical transaction. Of every €100 a traveller spends through this operation, how much is paid to people, businesses, and supply chains physically present in the municipality? Higher score: 80%+. Lower score: under 50%.

2.3 Disclosure of non-commission outflows. Does the operation disclose its own non-local cost base — payment processing, hosting, software, advertising spend, professional services — with the same specificity it demands of competitors? Higher score: itemised public disclosure. Lower score: outflows acknowledged only when challenged.

2.4 Sophisticated failure criterion — selective disclosure. Operations that publish “money stays local” claims while omitting their own outflows score Failing on this dimension regardless of other criteria. This is the criterion that distinguishes operations doing the work from operations performing the rhetoric.


Dimension 3: Booking & Distribution Architecture

How do travellers reach providers, and what does the path cost?

3.1 Direct booking accessibility. Can a traveller book the operation’s services directly, without a platform intermediary, at no penalty? Higher score: prominent direct booking, equal or better pricing than intermediated channels. Lower score: direct booking obscured, harder to access, or priced punitively.

3.2 Commission and fee transparency. When an intermediated booking occurs, is the commission or fee structure disclosed to the traveller? Higher score: visible at point of booking. Lower score: invisible, or only inferable from the gap between intermediary and direct prices.

3.3 Provider distribution autonomy. Does the operation respect or constrain providers’ ability to distribute through other channels? Higher score: providers retain full control of their channel mix. Lower score: rate parity, exclusivity, or equivalent constraints.

3.4 Sophisticated failure criterion — pay-to-rank. Does the operation’s visibility, ranking, or recommendation logic correlate with commission tier, advertising spend, or other commercial relationships in ways that diverge from quality signals? Higher score: ranking is editorially or algorithmically grounded in factors verifiable to travellers. Lower score: higher-paying providers receive better visibility regardless of provider quality. This is the OTA mechanic at any scale.


Dimension 4: Experience Integrity

Is the experience particular to this place, or replicable anywhere?

4.1 Place-specificity. Is the experience genuinely distinctive to this destination, or is it a templated product applied here? Higher score: experience could not be replicated elsewhere without losing its meaning. Lower score: identical product offered in multiple destinations under different branding.

4.2 Group size and traveller-to-host ratio. Is the experience structured at a human scale — small groups, named hosts, capacity for relationship — or at industrial scale — coach-loads, scripts, processed throughput? Higher score: small-group, named-host. Lower score: mass-handling, anonymous, throughput-optimised.

4.3 Variability and responsiveness. Does the experience respond to weather, group composition, traveller interest, and the rhythms of the place — or does it run to script regardless of circumstance? Higher score: variable, host-led, responsive. Lower score: scripted, time-locked, unchanged across visits.

4.4 Cultural and ecological grounding. Does the experience meaningfully connect travellers to the cultural and ecological reality of the place, or does it offer the aesthetic of place without the substance? Higher score: substantive engagement with local life and land. Lower score: photogenic surface with no underlying engagement.

4.5 Sophisticated failure criterion — performed authenticity. Operations that explicitly market authenticity while running structurally industrialised products score Failing. The combination of authenticity-as-claim and standardisation-as-practice is more harmful than honest standardisation, because it pollutes the term itself.


Dimension 5: Place & Resident Impact

What does this operation do to the community it operates within?

5.1 Housing impact. Does the operation occupy housing stock that would otherwise be available to residents — full-time short-let conversions, staff accommodation displacement, speculative purchase for tourism use? Higher score: no displacement of resident housing. Lower score: material contribution to housing pressure.

5.2 Resident voice in operation. Are residents (not employees, not providers) given structured input into how the operation behaves — content, marketing, presence, partnerships? Higher score: documented resident advisory role with real influence. Lower score: residents are neither consulted nor able to raise concerns.

5.3 Resident objection mechanism. Is there a public, functioning route for a resident to object to specific itineraries, marketing language, partnerships, or behaviours, with a stated response process? Higher score: published mechanism with logged responses. Lower score: no mechanism, or a mechanism that exists nominally but has never produced an outcome.

5.4 Environmental and carrying-capacity awareness. Does the operation publicly recognise the environmental and social carrying capacity of the destination, and does its growth model respect it? Higher score: stated capacity limits, willingness to throttle demand. Lower score: growth-only model, no carrying-capacity language.

5.5 Sophisticated failure criterion — externalised costs. Operations whose business model depends on costs being absorbed by residents — congestion, infrastructure load, inflation of local goods, erosion of social fabric — without contribution to the costs they create score Failing. The test is structural, not rhetorical.


Dimension 6: Cultural Sovereignty

Whose culture is being sold, and on whose terms?

6.1 Sámi cultural relationship. Where Sámi cultural elements appear in marketing, product, or imagery, is there a documented, ongoing relationship with Sámi-led bodies that authorises and shapes that use? Higher score: formal partnership with named Sámi-led organisations, with content authorised by them. Lower score: Sámi imagery used decoratively or generically.

6.2 Finnish-language equivalence. Does the operation present in Finnish as well as in international languages, on substantially equivalent terms? Higher score: full Finnish equivalence on consumer surfaces. Lower score: Finnish content thinner, secondary, or absent.

6.3 Local terminology and knowledge. Does the operation use Finnish and Sámi terms correctly, in context, with explanation rather than as exotic decoration? Higher score: terminology used substantively, with attribution. Lower score: terminology used as marketing flourish without substance.

6.4 Sophisticated failure criterion — extractive cultural use. Operations that use cultural symbols, ritual, or imagery as marketing material while having no operational relationship with the communities those elements come from score Failing. The substantive question is not whether the operation looks culturally connected, but whether the cultures named have any standing to influence the operation.


Dimension 7: Transparency & Disclosure

Does the operation make itself legible to honest scrutiny?

7.1 Commercial model legibility. Beyond the pass/fail of HC3, how clearly does the operation present its commercial model to a non-expert traveller? Higher score: model explained on a dedicated page in plain language, with examples. Lower score: model technically disclosed but practically hidden.

7.2 Editorial-versus-commercial separation. Where the operation produces editorial content alongside commercial offerings, is the separation between the two clearly marked? Higher score: paid placements, partnerships, and commission relationships visibly disclosed at the point of presentation. Lower score: editorial and commercial blurred without disclosure.

7.3 AI and automation disclosure. Where AI, recommendation algorithms, or automated systems shape what travellers see, is this disclosed in language travellers can understand? Higher score: clear, specific disclosure of AI’s role. Lower score: AI use undisclosed or buried in technical language.

7.4 Sophisticated failure criterion — disclosure asymmetry. Operations that demand transparency from competitors while themselves disclosing only what is flattering score Failing. The audit is interested in operations that disclose what they would prefer to hide, not operations that disclose what helps them market.


Dimension 8: Accountability & Right of Reply

What happens when something goes wrong?

8.1 Substantive correction policy. Beyond HC7’s pass/fail, does the correction policy specify response times, escalation routes, and standards for what merits public correction? Higher score: published, specific, time-bound. Lower score: vague commitment without operational specificity.

8.2 Removal-for-cause criteria. Where the operation can remove or downgrade other parties (providers, partners, content), are the criteria for doing so codified and public? Higher score: written criteria, applied consistently, with appeal route. Lower score: admin-discretionary, unwritten, inconsistent.

8.3 External scrutiny. Is there a person, body, or process external to the operation with standing to challenge its decisions, scores, or claims? Higher score: named external scrutineer with public remit. Lower score: all assessment internal.

8.4 Stakeholder feedback loop. Is there a structured process by which affected parties — providers, residents, travellers, communities — can shape the operation’s standards and criteria over time? Higher score: documented feedback loop with logged changes. Lower score: no structured route, or feedback solicited only when convenient.

8.5 Sophisticated failure criterion — single-person bottleneck. Operations whose authenticity, verification, and accountability all depend on a single individual’s judgement, time, or attention score Failing. The criterion exists because solo-administration of trust signals is structurally fragile, and the audit is interested in operations whose standards survive their founder.


Verdicts

The verdict combines the hard-criteria status with the dimension grades.

Pass

A Pass operation is exemplary. It is structurally aligned with the long-term value of the destination and demonstrably so on public, verifiable evidence.

Pass with Notes

A Pass with Notes operation is broadly aligned but with specific weaknesses identified. The Notes are public and the operation is expected to address them on a stated timeline.

Conditional Fail

A Conditional Fail operation cannot be presented as aligned with the model. The operation may continue operating, but it should not be referenced as exemplary, recommended without caveats, or used as a positive reference point. The conditions are stated publicly and a re-audit timeline is agreed.

Fail

A Fail operation is structurally misaligned with the model. The audit’s verdict is published with evidence and reasoning. The operation is not recommended, partnered with, or held up as an example.


How scoring is applied

The audit is applied as follows:


Disputes and revision

Operations have right of reply on every published audit. Disputes are handled as follows:

The framework is revised annually. Each revision is dated, version-numbered, and accompanied by a public changelog.


Limitations

This framework cannot do everything. Specifically:


What this framework is not yet

The following are open and are flagged for v0.2:


Application: the author’s own operation

levifinland.com will be audited against this framework before it accepts its first paid direct booking. The audit will be published in full on future.levifinland.com, including:

If the audit produces a verdict of Conditional Fail or Fail, the operation does not launch in the form audited until the failures are resolved or the model changed. This is the published commitment.

If the operation finds itself rewriting the framework to make itself score better, the framework is being corrupted. Independent review of the framework itself, not just of operations against it, is part of how that corruption is prevented.


Author: Colin Harrison
Status: v0.1, draft for review. The framework will be revised before v1.0.
Contact: colin@levifinland.com
Revision history: this is the first version.