modality and uncertainty

definition

modality qualifies the truth of statements beyond simple true/false, expressing concepts like necessity, possibility, knowledge, and obligation. uncertainty quantifies our confidence in statements using probability, fuzzy values, or other measures.

together, modality and uncertainty enable reasoning systems to handle the nuanced truth conditions and degrees of belief that characterize real-world reasoning.

types of modality

alethic modality

concerns logical and metaphysical necessity/possibility - what must be true or could be true given the nature of reality.

necessity (□p)

statement p is necessarily true - true in all possible worlds:

□(all bachelors are unmarried)
mathematical truths: □(2 + 2 = 4)
logical truths: □(p ∨ ¬p)

possibility (◇p)

statement p is possibly true - true in at least one possible world:

◇(there exists life on mars)
◇(quantum computers solve np-complete problems efficiently)

contingency

statement is contingently true - true in the actual world but could be false:

(it's raining today) - true now, but could be false
(the president is giving a speech) - depends on circumstances

implementation in reasoning systems

from enum import Enum

class AlethicModality(Enum):
    NECESSARY = "necessarily"
    POSSIBLE = "possibly"
    IMPOSSIBLE = "impossible"
    CONTINGENT = "contingent"

class ModalStatement:
    def __init__(self, proposition, modality, confidence=1.0):
        self.proposition = proposition
        self.modality = modality
        self.confidence = confidence

    def is_consistent_with(self, other):
        # check modal consistency: necessary(p) and impossible(p)
        # cannot both hold for the same proposition
        conflicting = {
            (AlethicModality.NECESSARY, AlethicModality.IMPOSSIBLE),
            (AlethicModality.IMPOSSIBLE, AlethicModality.NECESSARY),
        }
        if (self.modality, other.modality) in conflicting:
            return self.proposition != other.proposition

        return True

class InconsistencyError(Exception):
    pass

class ModalReasoner:
    def __init__(self):
        self.modal_statements = []

    def is_consistent(self, stmt):
        return all(stmt.is_consistent_with(s) for s in self.modal_statements)

    def add_modal_fact(self, proposition, modality):
        stmt = ModalStatement(proposition, modality)
        if self.is_consistent(stmt):
            self.modal_statements.append(stmt)
        else:
            raise InconsistencyError(
                f"Modal statement about '{proposition}' conflicts with existing knowledge")

    def is_necessarily_true(self, proposition):
        for stmt in self.modal_statements:
            if (stmt.proposition == proposition and
                stmt.modality == AlethicModality.NECESSARY):
                return True
        return False

    def derive_modal_consequences(self):
        new_statements = []

        # axiom T: □p → p (necessity implies truth)
        for stmt in self.modal_statements:
            if stmt.modality == AlethicModality.NECESSARY:
                truth_stmt = ModalStatement(stmt.proposition, None)
                new_statements.append(truth_stmt)

        # □p → ◇p (necessity implies possibility)
        for stmt in self.modal_statements:
            if stmt.modality == AlethicModality.NECESSARY:
                poss_stmt = ModalStatement(stmt.proposition, AlethicModality.POSSIBLE)
                new_statements.append(poss_stmt)

        return new_statements
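
a quick usage sketch (propositions here are plain strings; any hashable label works):

reasoner = ModalReasoner()
reasoner.add_modal_fact("2 + 2 = 4", AlethicModality.NECESSARY)

# axiom T and □p → ◇p both fire; modality None marks a plain (non-modal) truth
for stmt in reasoner.derive_modal_consequences():
    print(stmt.proposition, stmt.modality)

# adding the same proposition as IMPOSSIBLE would raise InconsistencyError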

epistemic modality

concerns knowledge, belief, and certainty from the reasoner’s perspective - what is known or believed to be true.

knowledge (Kp)

the reasoner knows that p is true:

K(the password is "admin123") - agent knows the password
K(all swans in the database are white) - based on available data

belief (Bp)

the reasoner believes that p is true (and may be mistaken):

B(it will rain tomorrow) - based on weather forecast
B(this email is spam) - based on classification model

uncertainty markers

degree of epistemic confidence:

probably(stock prices will fall) - high confidence
possibly(the server is overloaded) - moderate confidence
maybe(user prefers dark mode) - low confidence

implementation patterns


class EpistemicState:
    def __init__(self):
        self.knowledge = set()  # known facts (confidence = 1.0)
        self.beliefs = {}      # proposition -> confidence level
        self.evidence = {}     # proposition -> supporting evidence

    def knows(self, proposition):
        return proposition in self.knowledge

    def believes(self, proposition, threshold=0.7):
        return self.beliefs.get(proposition, 0.0) > threshold

    def add_knowledge(self, proposition, evidence=None):
        self.knowledge.add(proposition)
        if evidence:
            self.evidence[proposition] = evidence

    def update_belief(self, proposition, new_confidence, evidence=None):
        # bayesian belief updating
        prior = self.beliefs.get(proposition, 0.5)

        if evidence:
            # incorporate evidence strength
            evidence_strength = self.evaluate_evidence(evidence)
            posterior = self.bayesian_update(prior, new_confidence, evidence_strength)
        else:
            posterior = new_confidence

        self.beliefs[proposition] = posterior

        # promote to knowledge if highly confident with strong evidence
        if posterior > 0.95 and evidence and self.evaluate_evidence(evidence) > 0.9:
            self.add_knowledge(proposition, evidence)

    def bayesian_update(self, prior, likelihood, evidence_strength):
        # simplified update: linear pooling of prior and new estimate,
        # weighted by evidence strength (not a full bayesian posterior)
        return prior * (1 - evidence_strength) + likelihood * evidence_strength

    def evaluate_evidence(self, evidence):
        # placeholder scoring: assume evidence objects expose a numeric
        # strength in [0, 1]; fall back to moderate support otherwise
        return getattr(evidence, "strength", 0.8)

class EpistemicReasoner:
    def __init__(self):
        self.state = EpistemicState()

    def reason_about_knowledge(self):
        # apply epistemic axioms
        derived_beliefs = {}

        # knowledge implies belief: K(p) → B(p)
        for known_fact in self.state.knowledge:
            derived_beliefs[known_fact] = 1.0

        # positive introspection: K(p) → K(K(p))
        for known_fact in self.state.knowledge:
            meta_knowledge = f"K({known_fact})"
            derived_beliefs[meta_knowledge] = 1.0

        # belief consistency: B(p) ∧ B(¬p) → ⊥ (detect inconsistencies)
        for prop, confidence in self.state.beliefs.items():
            neg_prop = f"¬{prop}"
            if neg_prop in self.state.beliefs:
                inconsistency_level = min(confidence, self.state.beliefs[neg_prop])
                if inconsistency_level > 0.5:
                    self.handle_inconsistency(prop, neg_prop)

        return derived_beliefs

    def handle_inconsistency(self, prop, neg_prop):
        # simple resolution sketch: keep the stronger belief and cap the
        # weaker one at its complement
        if self.state.beliefs[prop] >= self.state.beliefs[neg_prop]:
            self.state.beliefs[neg_prop] = 1.0 - self.state.beliefs[prop]
        else:
            self.state.beliefs[prop] = 1.0 - self.state.beliefs[neg_prop]

    def query_with_confidence(self, proposition):
        if self.state.knows(proposition):
            return {"truth_value": True, "confidence": 1.0, "basis": "knowledge"}
        elif proposition in self.state.beliefs:
            conf = self.state.beliefs[proposition]
            return {"truth_value": conf > 0.5, "confidence": conf, "basis": "belief"}
        else:
            return {"truth_value": None, "confidence": 0.5, "basis": "unknown"}

deontic modality

concerns obligations, permissions, and prohibitions in normative contexts - what should or may be done.

obligation (Op)

action or state is required or obligatory:

O(submit tax returns by april 15) - legal obligation
O(backup database daily) - system requirement
O(obtain user consent for data collection) - regulatory obligation

permission (Pp)

action or state is allowed or permissible:

P(employee may work from home) - workplace policy
P(user can delete their account) - system permission
P(researcher may access anonymized data) - ethical permission

prohibition (Fp)

action or state is forbidden:

F(access confidential files without authorization) - security rule
F(discriminate based on protected characteristics) - legal prohibition
F(use system during maintenance window) - operational constraint

implementation in policy systems

from enum import Enum
from typing import List, Dict, Any

class DeonticModality(Enum):
    OBLIGATORY = "must"
    PERMITTED = "may"
    FORBIDDEN = "must_not"

class DeonticRule:
    def __init__(self, action, modality, conditions=None, authority=None):
        self.action = action
        self.modality = modality
        self.conditions = conditions or []
        self.authority = authority
        self.priority = 0

    def applies_to_context(self, context):
        # conditions are callables taking the context dict
        return all(cond(context) for cond in self.conditions)

class DeonticReasoner:
    def __init__(self):
        self.rules = []
        self.authority_hierarchy = {}

    def add_rule(self, action, modality, conditions=None, authority=None, priority=0):
        rule = DeonticRule(action, modality, conditions, authority)
        rule.priority = priority
        self.rules.append(rule)

    def evaluate_action(self, action, context):
        applicable_rules = [
            rule for rule in self.rules
            if rule.action == action and rule.applies_to_context(context)
        ]

        if not applicable_rules:
            return DeonticModality.PERMITTED  # default: permitted if not regulated

        # resolve conflicts by priority and authority
        applicable_rules.sort(key=lambda r: (r.priority, self.get_authority_weight(r.authority)),
                            reverse=True)

        highest_priority_rule = applicable_rules[0]

        # check for deontic conflicts
        obligations = [r for r in applicable_rules if r.modality == DeonticModality.OBLIGATORY]
        prohibitions = [r for r in applicable_rules if r.modality == DeonticModality.FORBIDDEN]

        if obligations and prohibitions:
            return self.resolve_deontic_conflict(obligations, prohibitions, context)

        return highest_priority_rule.modality

    def resolve_deontic_conflict(self, obligations, prohibitions, context):
        # handle O(p) ∧ F(p) conflicts

        # check for exception conditions
        for prohibition in prohibitions:
            for obligation in obligations:
                if self.has_exception_clause(obligation, prohibition, context):
                    return DeonticModality.OBLIGATORY

        # use authority hierarchy
        highest_auth_prohibition = max(prohibitions,
                                     key=lambda r: self.get_authority_weight(r.authority))
        highest_auth_obligation = max(obligations,
                                    key=lambda r: self.get_authority_weight(r.authority))

        if (self.get_authority_weight(highest_auth_prohibition.authority) >
            self.get_authority_weight(highest_auth_obligation.authority)):
            return DeonticModality.FORBIDDEN
        else:
            return DeonticModality.OBLIGATORY

    def get_authority_weight(self, authority):
        # unknown authorities default to weight 0
        return self.authority_hierarchy.get(authority, 0)

    def has_exception_clause(self, obligation, prohibition, context):
        # placeholder: no exception mechanism modeled here
        return False

# example: gdpr compliance system
class GDPRComplianceReasoner(DeonticReasoner):
    def __init__(self):
        super().__init__()
        self.setup_gdpr_rules()

    def setup_gdpr_rules(self):
        # data processing obligations
        self.add_rule(
            action="process_personal_data",
            modality=DeonticModality.OBLIGATORY,
            conditions=[
                lambda ctx: ctx.get('user_consent') is True,
                lambda ctx: ctx.get('purpose_legitimate') is True
            ],
            authority="gdpr_article_6",
            priority=10
        )

        # data processing prohibitions
        self.add_rule(
            action="process_personal_data",
            modality=DeonticModality.FORBIDDEN,
            conditions=[
                lambda ctx: ctx.get('user_consent') is False
            ],
            authority="gdpr_article_6",
            priority=10
        )

        # data subject rights
        self.add_rule(
            action="provide_data_access",
            modality=DeonticModality.OBLIGATORY,
            conditions=[
                lambda ctx: ctx.get('subject_request') == True,
                lambda ctx: ctx.get('identity_verified') == True
            ],
            authority="gdpr_article_15",
            priority=8
        )
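
a usage sketch (context keys match the rule conditions above; the scenarios are made up):

gdpr = GDPRComplianceReasoner()

print(gdpr.evaluate_action('process_personal_data',
                           {'user_consent': True, 'purpose_legitimate': True}))
# DeonticModality.OBLIGATORY

print(gdpr.evaluate_action('process_personal_data', {'user_consent': False}))
# DeonticModality.FORBIDDEN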

uncertainty measures

probability

mathematical framework for quantifying likelihood:

bayesian probability

subjective degree of belief updated with evidence:

import itertools

from scipy.stats import beta

class BayesianUncertainty:
    def __init__(self, prior_alpha=1, prior_beta=1):
        # beta distribution parameters
        self.alpha = prior_alpha
        self.beta = prior_beta

    def probability(self):
        return self.alpha / (self.alpha + self.beta)

    def update(self, evidence_positive, evidence_negative):
        # update beta distribution with evidence
        self.alpha += evidence_positive
        self.beta += evidence_negative

    def confidence_interval(self, confidence_level=0.95):
        # credible interval for probability estimate
        lower = beta.ppf((1 - confidence_level) / 2, self.alpha, self.beta)
        upper = beta.ppf((1 + confidence_level) / 2, self.alpha, self.beta)
        return (lower, upper)

    def sample(self, n_samples=1000):
        return beta.rvs(self.alpha, self.beta, size=n_samples)

class ProbabilisticReasoner:
    def __init__(self):
        self.probabilities = {}
        self.dependencies = {}

    def set_probability(self, proposition, probability):
        self.probabilities[proposition] = probability

    def get_probability(self, proposition):
        return self.probabilities.get(proposition, 0.5)

    def add_dependency(self, conclusion, premises, conditional_prob_table):
        # p(conclusion | premises) = conditional_prob_table
        self.dependencies[conclusion] = {
            'premises': premises,
            'cpt': conditional_prob_table
        }

    def compute_joint_probability(self, propositions):
        # simplified joint probability computation
        # (assumes independence for missing dependencies)
        prob = 1.0
        for prop in propositions:
            prob *= self.get_probability(prop)
        return prob

    def enumerate_assignments(self):
        # all true/false assignments over the known propositions
        props = list(self.probabilities.keys())
        for values in itertools.product([True, False], repeat=len(props)):
            yield dict(zip(props, values))

    def consistent_with_evidence(self, assignment, evidence):
        # evidence: dict of proposition -> observed truth value
        return all(assignment.get(var) == val for var, val in evidence.items())

    def compute_assignment_probability(self, assignment):
        # independence assumption: product of per-proposition marginals
        prob = 1.0
        for prop, value in assignment.items():
            p = self.get_probability(prop)
            prob *= p if value else (1.0 - p)
        return prob

    def marginalize(self, target, evidence):
        # compute p(target | evidence) by enumerating all assignments
        # (exponential in the number of propositions; fine for small models)
        total_prob = 0.0
        normalized_prob = 0.0

        for assignment in self.enumerate_assignments():
            if self.consistent_with_evidence(assignment, evidence):
                joint_prob = self.compute_assignment_probability(assignment)
                total_prob += joint_prob

                if assignment.get(target) is True:
                    normalized_prob += joint_prob
        return normalized_prob / total_prob if total_prob > 0 else 0.0
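
a usage sketch for the bayesian estimator: tracking a flaky service's uptime from observed successes and failures:

service_up = BayesianUncertainty(prior_alpha=1, prior_beta=1)  # uniform prior
service_up.update(evidence_positive=47, evidence_negative=3)

print(round(service_up.probability(), 3))    # 0.923
print(service_up.confidence_interval(0.95))  # roughly (0.84, 0.98)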

frequentist probability

based on observed frequencies in data:

import numpy as np
from scipy.stats import norm

class FrequentistUncertainty:
    def __init__(self):
        self.observations = {}

    def observe(self, event, outcome):
        if event not in self.observations:
            self.observations[event] = {'positive': 0, 'negative': 0}

        if outcome:
            self.observations[event]['positive'] += 1
        else:
            self.observations[event]['negative'] += 1

    def probability(self, event):
        if event not in self.observations:
            return 0.5  # uniform prior

        obs = self.observations[event]
        total = obs['positive'] + obs['negative']
        return obs['positive'] / total if total > 0 else 0.5

    def confidence_interval(self, event, confidence_level=0.95):
        # wilson score interval
        if event not in self.observations:
            return (0.0, 1.0)

        obs = self.observations[event]
        n = obs['positive'] + obs['negative']
        p = obs['positive'] / n if n > 0 else 0.5

        z = norm.ppf((1 + confidence_level) / 2)
        denominator = 1 + z**2 / n
        centre = (p + z**2 / (2*n)) / denominator
        margin = z * np.sqrt((p * (1-p) + z**2 / (4*n)) / n) / denominator

        return (max(0, centre - margin), min(1, centre + margin))
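
usage sketch: the wilson interval narrows as observations accumulate:

freq = FrequentistUncertainty()
for _ in range(9):
    freq.observe('email_is_spam', True)
freq.observe('email_is_spam', False)

print(freq.probability('email_is_spam'))          # 0.9
print(freq.confidence_interval('email_is_spam'))  # roughly (0.60, 0.98)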

fuzzy values

degrees of membership in fuzzy sets - handles vagueness and gradual transitions:

class FuzzyValue:
    def __init__(self, value, membership_function=None):
        self.value = value  # crisp value
        self.membership_function = membership_function

    def membership_degree(self, fuzzy_set):
        if self.membership_function:
            return self.membership_function(self.value, fuzzy_set)
        else:
            return fuzzy_set.membership(self.value)

class FuzzySet:
    def __init__(self, name, membership_function):
        self.name = name
        self.membership_function = membership_function

    def membership(self, value):
        return self.membership_function(value)

    def union(self, other):
        # fuzzy union: max operator
        return FuzzySet(
            f"{self.name}_OR_{other.name}",
            lambda x: max(self.membership(x), other.membership(x))
        )

    def intersection(self, other):
        # fuzzy intersection: min operator
        return FuzzySet(
            f"{self.name}_AND_{other.name}",
            lambda x: min(self.membership(x), other.membership(x))
        )

    def complement(self):
        # fuzzy negation
        return FuzzySet(
            f"NOT_{self.name}",
            lambda x: 1.0 - self.membership(x)
        )

class FuzzyReasoner:
    def __init__(self):
        self.fuzzy_facts = {}
        self.fuzzy_rules = []

    def add_fuzzy_fact(self, proposition, membership_degree):
        self.fuzzy_facts[proposition] = membership_degree

    def add_fuzzy_rule(self, premises, conclusion, rule_strength=1.0):
        self.fuzzy_rules.append({
            'premises': premises,
            'conclusion': conclusion,
            'strength': rule_strength
        })

    def evaluate_fuzzy_rule(self, rule):
        # evaluate premises using t-norm (min operator)
        premise_strength = min([
            self.fuzzy_facts.get(premise, 0.0)
            for premise in rule['premises']
        ])

        # apply rule strength
        conclusion_strength = min(premise_strength, rule['strength'])

        return rule['conclusion'], conclusion_strength

    def fuzzy_inference(self):
        # apply all fuzzy rules
        derived_facts = {}

        for rule in self.fuzzy_rules:
            conclusion, strength = self.evaluate_fuzzy_rule(rule)

            # aggregate multiple rules for same conclusion (max operator)
            if conclusion in derived_facts:
                derived_facts[conclusion] = max(derived_facts[conclusion], strength)
            else:
                derived_facts[conclusion] = strength

        return derived_facts

# example: fuzzy temperature control (temperatures in °C; memberships clamped to [0, 1])
def create_temperature_fuzzy_sets():
    return {
        'cold': FuzzySet('cold', lambda x: min(1, max(0, (15 - x) / 15))),
        'comfortable': FuzzySet('comfortable',
                               lambda x: max(0, min((x - 15) / 5, (25 - x) / 5))),
        'hot': FuzzySet('hot', lambda x: min(1, max(0, (x - 25) / 15)))
    }
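
a short sketch tying the sets to the reasoner (fact and rule names are illustrative):

sets = create_temperature_fuzzy_sets()
temp = 23.0

reasoner = FuzzyReasoner()
reasoner.add_fuzzy_fact('room_is_comfortable', sets['comfortable'].membership(temp))  # 0.4
reasoner.add_fuzzy_rule(['room_is_comfortable'], 'fan_off', rule_strength=0.9)

print(reasoner.fuzzy_inference())  # {'fan_off': 0.4}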

confidence measures

subjective or evidential measures of belief strength:

import numpy as np
from dataclasses import dataclass

@dataclass
class Evidence:
    source: str
    reliability: float  # 0.0 to 1.0
    strength: float     # how strongly it supports the conclusion
    recency: float      # how recent (affects confidence decay)

class ConfidenceReasoner:
    def __init__(self):
        self.statements = {}
        self.evidence_base = {}

    def add_statement_with_evidence(self, statement, evidence_list):
        self.statements[statement] = evidence_list
        for evidence in evidence_list:
            if evidence.source not in self.evidence_base:
                self.evidence_base[evidence.source] = []
            self.evidence_base[evidence.source].append((statement, evidence))

    def compute_confidence(self, statement):
        if statement not in self.statements:
            return 0.0

        evidence_list = self.statements[statement]

        # weighted confidence aggregation
        total_weight = 0.0
        weighted_confidence = 0.0

        for evidence in evidence_list:
            # evidence weight combines reliability, strength, and recency
            weight = evidence.reliability * evidence.strength * evidence.recency
            confidence_contribution = weight * evidence.strength

            total_weight += weight
            weighted_confidence += confidence_contribution

        if total_weight == 0:
            return 0.0

        base_confidence = weighted_confidence / total_weight

        # adjust for evidence diversity
        diversity_bonus = self.compute_evidence_diversity(evidence_list)

        # adjust for contradictory evidence
        contradiction_penalty = self.compute_contradiction_penalty(statement)

        final_confidence = min(1.0, base_confidence + diversity_bonus - contradiction_penalty)
        return max(0.0, final_confidence)

    def compute_evidence_diversity(self, evidence_list):
        # bonus for evidence from diverse sources
        unique_sources = set(evidence.source for evidence in evidence_list)
        diversity_bonus = min(0.2, len(unique_sources) * 0.05)
        return diversity_bonus

    def compute_contradiction_penalty(self, statement):
        # penalty for existing contradictory evidence
        contradictory_statements = self.find_contradictory_statements(statement)

        penalty = 0.0
        for contra_stmt in contradictory_statements:
            contra_confidence = self.compute_base_confidence(contra_stmt)
            penalty += contra_confidence * 0.3  # reduce confidence by contradictory evidence

        return min(0.5, penalty)  # cap penalty at 50%

    def find_contradictory_statements(self, statement):
        # simplified contradiction detection
        contradictory = []

        for other_statement in self.statements:
            if self.are_contradictory(statement, other_statement):
                contradictory.append(other_statement)

        return contradictory

    def are_contradictory(self, statement, other_statement):
        # naive syntactic check: "p" vs "not p"
        return (other_statement == f"not {statement}" or
                statement == f"not {other_statement}")

    def compute_base_confidence(self, statement):
        # evidence-weighted confidence without the contradiction penalty
        evidence_list = self.statements.get(statement, [])
        total_weight = sum(e.reliability * e.strength * e.recency
                           for e in evidence_list)
        if total_weight == 0:
            return 0.0
        weighted = sum(e.reliability * e.strength * e.recency * e.strength
                       for e in evidence_list)
        return weighted / total_weight

    def decay_confidence_over_time(self, time_elapsed):
        # reduce confidence of old evidence
        for statement, evidence_list in self.statements.items():
            for evidence in evidence_list:
                # exponential decay
                evidence.recency *= np.exp(-0.1 * time_elapsed)
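
usage sketch (sources and numbers are illustrative):

reasoner = ConfidenceReasoner()
reasoner.add_statement_with_evidence("deploy_is_safe", [
    Evidence(source="ci_pipeline", reliability=0.95, strength=0.9, recency=1.0),
    Evidence(source="manual_qa", reliability=0.8, strength=0.7, recency=0.9),
])

print(round(reasoner.compute_confidence("deploy_is_safe"), 2))  # ≈ 0.93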

applications in ai systems

uncertainty in machine learning

import torch
import torch.nn as nn
import torch.nn.functional as F

class BayesianNeuralNetwork(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super().__init__()
        self.fc1_mean = nn.Linear(input_size, hidden_size)
        self.fc1_logvar = nn.Linear(input_size, hidden_size)
        self.fc2_mean = nn.Linear(hidden_size, output_size)
        self.fc2_logvar = nn.Linear(hidden_size, output_size)

    def reparameterize(self, mu, logvar):
        std = torch.exp(0.5 * logvar)
        eps = torch.randn_like(std)
        return mu + eps * std

    def forward(self, x, n_samples=10):
        # multiple stochastic forward passes: reparameterized activations
        # stand in for sampled network weights
        predictions = []

        for _ in range(n_samples):
            h1_mu = self.fc1_mean(x)
            h1_logvar = self.fc1_logvar(x)
            h1 = self.reparameterize(h1_mu, h1_logvar)
            h1 = F.relu(h1)

            out_mu = self.fc2_mean(h1)
            out_logvar = self.fc2_logvar(h1)
            out = self.reparameterize(out_mu, out_logvar)

            predictions.append(out)

        # return mean and uncertainty
        predictions = torch.stack(predictions)
        mean_prediction = predictions.mean(dim=0)
        uncertainty = predictions.var(dim=0)

        return mean_prediction, uncertainty

class UncertaintyAwareClassifier:
    def __init__(self, model, uncertainty_threshold=0.1):
        self.model = model
        self.uncertainty_threshold = uncertainty_threshold

    def predict_with_uncertainty(self, x):
        with torch.no_grad():
            mean_pred, uncertainty = self.model(x)
            probabilities = F.softmax(mean_pred, dim=-1)

            max_prob, predicted_class = torch.max(probabilities, dim=-1)

            # flag uncertain predictions
            uncertain_mask = uncertainty.max(dim=-1)[0] > self.uncertainty_threshold

            return {
                'predictions': predicted_class,
                'probabilities': probabilities,
                'uncertainty': uncertainty,
                'is_uncertain': uncertain_mask,
                'confidence': max_prob
            }

    def should_defer_to_human(self, prediction_result):
        return (prediction_result['is_uncertain'].any() or
                prediction_result['confidence'].min() < 0.7)
class ModalLogicReasoner:
    def __init__(self):
        self.world_relations = {}  # world -> accessible worlds
        self.world_valuations = {}  # (world, proposition) -> truth value

    def add_world(self, world_id, accessible_worlds=None):
        self.world_relations[world_id] = accessible_worlds or []
        self.world_valuations[world_id] = {}

    def set_truth_value(self, world, proposition, value):
        if world not in self.world_valuations:
            self.world_valuations[world] = {}
        self.world_valuations[world][proposition] = value

    def evaluate_necessity(self, world, proposition):
        # □p is true at w iff p is true at all worlds accessible from w
        accessible_worlds = self.world_relations.get(world, [])

        for accessible_world in accessible_worlds:
            if not self.world_valuations.get(accessible_world, {}).get(proposition, False):
                return False

        return True

    def evaluate_possibility(self, world, proposition):
        # ◇p is true at w iff p is true at some world accessible from w
        accessible_worlds = self.world_relations.get(world, [])

        for accessible_world in accessible_worlds:
            if self.world_valuations.get(accessible_world, {}).get(proposition, False):
                return True

        return False

    def model_check(self, world, modal_formula):
        # recursive evaluation of modal formulas
        if modal_formula.operator == 'atom':
            return self.world_valuations.get(world, {}).get(modal_formula.proposition, False)

        elif modal_formula.operator == 'not':
            return not self.model_check(world, modal_formula.operand)

        elif modal_formula.operator == 'and':
            return (self.model_check(world, modal_formula.left) and
                   self.model_check(world, modal_formula.right))

        elif modal_formula.operator == 'necessity':
            return self.evaluate_necessity(world, modal_formula.proposition)

        elif modal_formula.operator == 'possibility':
            return self.evaluate_possibility(world, modal_formula.proposition)

        else:
            raise ValueError(f"Unknown operator: {modal_formula.operator}")
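
a small kripke model as a usage sketch: two worlds, with p true only in w2:

kripke = ModalLogicReasoner()
kripke.add_world('w1', accessible_worlds=['w1', 'w2'])
kripke.add_world('w2', accessible_worlds=['w2'])
kripke.set_truth_value('w1', 'p', False)
kripke.set_truth_value('w2', 'p', True)

print(kripke.evaluate_possibility('w1', 'p'))  # True: p holds at accessible w2
print(kripke.evaluate_necessity('w1', 'p'))    # False: p fails at accessible w1
print(kripke.evaluate_necessity('w2', 'p'))    # True: p holds everywhere w2 can see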

deontic reasoning in autonomous systems

class AutonomousAgentEthics:
    def __init__(self):
        self.deontic_reasoner = DeonticReasoner()
        self.action_queue = []
        self.ethical_constraints = []
        self.setup_ethical_framework()

    def setup_ethical_framework(self):
        # harm prevention principle
        self.deontic_reasoner.add_rule(
            action="cause_harm_to_human",
            modality=DeonticModality.FORBIDDEN,
            authority="asimov_first_law",
            priority=100
        )

        # privacy principle
        self.deontic_reasoner.add_rule(
            action="access_personal_data",
            modality=DeonticModality.FORBIDDEN,
            conditions=[lambda ctx: not ctx.get('user_consent', False)],
            authority="privacy_principle",
            priority=80
        )

        # truthfulness principle
        self.deontic_reasoner.add_rule(
            action="provide_information",
            modality=DeonticModality.OBLIGATORY,
            conditions=[lambda ctx: ctx.get('information_requested', False)],
            authority="truthfulness_principle",
            priority=60
        )

    def evaluate_action_permissibility(self, action, context):
        deontic_status = self.deontic_reasoner.evaluate_action(action, context)

        # compute ethical risk score
        risk_factors = self.assess_ethical_risks(action, context)
        risk_score = sum(risk_factors.values())

        return {
            'deontic_status': deontic_status,
            'ethical_risk_score': risk_score,
            'risk_factors': risk_factors,
            'recommendation': self.make_recommendation(deontic_status, risk_score)
        }

    def assess_ethical_risks(self, action, context):
        # placeholder risk model: read pre-scored factors from the context
        return {
            'harm_potential': context.get('harm_potential', 0.0),
            'privacy_impact': context.get('privacy_impact', 0.0),
            'autonomy_impact': context.get('autonomy_impact', 0.0),
        }

    def make_recommendation(self, deontic_status, risk_score):
        if deontic_status == DeonticModality.FORBIDDEN:
            return "DO_NOT_EXECUTE"
        elif deontic_status == DeonticModality.OBLIGATORY:
            return "MUST_EXECUTE"
        elif risk_score > 0.8:
            return "REQUEST_HUMAN_APPROVAL"
        elif risk_score > 0.5:
            return "EXECUTE_WITH_MONITORING"
        else:
            return "EXECUTE_FREELY"

integration patterns

combining modality and uncertainty

class ModalUncertaintyReasoner:
    def __init__(self):
        self.modal_statements = {}  # (proposition, modality) -> confidence
        self.evidential_support = {}

    def add_modal_belief(self, proposition, modality, confidence, evidence=None):
        key = (proposition, modality)
        self.modal_statements[key] = confidence

        if evidence:
            self.evidential_support[key] = evidence

    def query_modal_belief(self, proposition, modality):
        key = (proposition, modality)
        base_confidence = self.modal_statements.get(key, 0.0)

        # adjust for evidential support
        if key in self.evidential_support:
            evidence_strength = self.evaluate_evidence_strength(self.evidential_support[key])
            adjusted_confidence = base_confidence * evidence_strength
        else:
            adjusted_confidence = base_confidence * 0.5  # discount unsupported beliefs

        return adjusted_confidence

    def resolve_modal_conflicts(self):
        conflicts = []

        for (prop1, mod1), conf1 in self.modal_statements.items():
            for (prop2, mod2), conf2 in self.modal_statements.items():
                if self.are_modally_inconsistent(prop1, mod1, prop2, mod2):
                    conflicts.append({
                        'belief1': (prop1, mod1, conf1),
                        'belief2': (prop2, mod2, conf2),
                        'resolution': self.suggest_resolution(prop1, mod1, conf1, prop2, mod2, conf2)
                    })

        return conflicts

    def evaluate_evidence_strength(self, evidence):
        # assume evidence objects expose a numeric strength in [0, 1]
        return getattr(evidence, 'strength', 0.8)

    def are_modally_inconsistent(self, prop1, mod1, prop2, mod2):
        # necessary(p) conflicts with impossible(p)
        return (prop1 == prop2 and
                {mod1, mod2} == {AlethicModality.NECESSARY,
                                 AlethicModality.IMPOSSIBLE})

    def suggest_resolution(self, prop1, mod1, conf1, prop2, mod2, conf2):
        # keep the better-supported belief
        return (prop1, mod1) if conf1 >= conf2 else (prop2, mod2)

    def probabilistic_modal_reasoning(self, proposition):
        # compute probability distribution over modal values
        modal_probabilities = {}

        for modality in [AlethicModality.NECESSARY, AlethicModality.POSSIBLE,
                        AlethicModality.IMPOSSIBLE, AlethicModality.CONTINGENT]:
            confidence = self.query_modal_belief(proposition, modality)
            modal_probabilities[modality] = confidence

        # normalize to probability distribution
        total = sum(modal_probabilities.values())
        if total > 0:
            modal_probabilities = {mod: prob/total for mod, prob in modal_probabilities.items()}

        return modal_probabilities

uncertainty propagation in reasoning chains

class UncertaintyPropagationReasoner:
    def __init__(self):
        self.facts = {}  # fact -> (truth_value, uncertainty)
        self.rules = []

    def add_uncertain_fact(self, fact, truth_value, uncertainty):
        self.facts[fact] = (truth_value, uncertainty)

    def add_reasoning_rule(self, premises, conclusion, rule_confidence=1.0):
        self.rules.append({
            'premises': premises,
            'conclusion': conclusion,
            'confidence': rule_confidence
        })

    def propagate_uncertainty(self):
        derived_facts = {}

        for rule in self.rules:
            # compute premise uncertainty
            premise_uncertainties = []
            premise_truths = []

            for premise in rule['premises']:
                if premise in self.facts:
                    truth, uncertainty = self.facts[premise]
                    premise_truths.append(truth)
                    premise_uncertainties.append(uncertainty)
                else:
                    premise_truths.append(0.5)  # unknown
                    premise_uncertainties.append(1.0)   # maximum uncertainty

            # combine uncertainties (assuming independence)
            combined_uncertainty = self.combine_uncertainties(premise_uncertainties)

            # apply rule confidence
            conclusion_uncertainty = min(1.0, combined_uncertainty / rule['confidence'])

            # truth value propagation (simplified)
            conclusion_truth = min(premise_truths) if all(p > 0.5 for p in premise_truths) else 0.5

            derived_facts[rule['conclusion']] = (conclusion_truth, conclusion_uncertainty)

        return derived_facts

    def combine_uncertainties(self, uncertainties):
        # various uncertainty combination methods

        # method 1: maximum uncertainty
        # return max(uncertainties)

        # method 2: average uncertainty
        # return sum(uncertainties) / len(uncertainties)

        # method 3: multiplicative (assuming independence)
        combined = 1.0
        for u in uncertainties:
            combined *= (1 - u)
        return 1 - combined

        # method 4: dempster-shafer combination
        # (more complex, handles ignorance vs. conflict)
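
for intuition, a worked example of the multiplicative method:

reasoner = UncertaintyPropagationReasoner()
print(reasoner.combine_uncertainties([0.2, 0.3]))
# 1 - (0.8 * 0.7) = 0.44: the chain is more uncertain than either premise alone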

evaluation and validation

consistency checking

class ModalConsistencyChecker:
    def __init__(self):
        self.modal_axioms = [
            self.necessitation_axiom,
            self.distribution_axiom,
            self.knowledge_axiom,
            self.positive_introspection_axiom,
            self.negative_introspection_axiom
        ]

    def check_consistency(self, modal_statements):
        violations = []

        for axiom in self.modal_axioms:
            axiom_violations = axiom(modal_statements)
            violations.extend(axiom_violations)

        return violations

    def necessitation_axiom(self, statements):
        # axiom T: □p → p
        violations = []
        for (prop, modality), confidence in statements.items():
            if modality == AlethicModality.NECESSARY and confidence > 0.5:
                # check that p itself is also held true
                if (prop, None) not in statements or statements[(prop, None)] < 0.5:
                    violations.append(f"Axiom T violation: {prop} is necessary but not held true")
        return violations

    def distribution_axiom(self, statements):
        # □(p → q) → (□p → □q)
        violations = []
        # implementation would check for this pattern in statements
        return violations

    def knowledge_axiom(self, statements):
        # K(p) → p: knowledge is factive (stub)
        return []

    def positive_introspection_axiom(self, statements):
        # K(p) → K(K(p)) (stub)
        return []

    def negative_introspection_axiom(self, statements):
        # ¬K(p) → K(¬K(p)) (stub)
        return []

import numpy as np

class UncertaintyCalibration:
    def __init__(self):
        self.predictions = []
        self.outcomes = []

    def add_prediction(self, confidence, actual_outcome):
        self.predictions.append(confidence)
        self.outcomes.append(actual_outcome)

    def compute_calibration_error(self, n_bins=10):
        # expected calibration error (ece)
        predictions = np.asarray(self.predictions)
        outcomes = np.asarray(self.outcomes)

        bin_boundaries = np.linspace(0, 1, n_bins + 1)
        bin_lowers = bin_boundaries[:-1]
        bin_uppers = bin_boundaries[1:]

        ece = 0.0
        for bin_lower, bin_upper in zip(bin_lowers, bin_uppers):
            # predictions in this confidence bin
            in_bin = np.logical_and(predictions > bin_lower,
                                    predictions <= bin_upper)
            prop_in_bin = in_bin.mean()

            if prop_in_bin > 0:
                accuracy_in_bin = outcomes[in_bin].mean()
                avg_confidence_in_bin = predictions[in_bin].mean()

                ece += np.abs(avg_confidence_in_bin - accuracy_in_bin) * prop_in_bin

        return ece

    def reliability_diagram(self):
        # return binned confidence vs accuracy data for a reliability plot
        predictions = np.asarray(self.predictions)
        outcomes = np.asarray(self.outcomes)

        # bin predictions by confidence
        bins = np.linspace(0, 1, 11)
        bin_centers = (bins[:-1] + bins[1:]) / 2

        bin_accuracies = []
        bin_confidences = []
        bin_counts = []

        for i in range(len(bins) - 1):
            in_bin = np.logical_and(predictions >= bins[i],
                                    predictions < bins[i+1])

            if in_bin.sum() > 0:
                bin_accuracies.append(outcomes[in_bin].mean())
                bin_confidences.append(predictions[in_bin].mean())
                bin_counts.append(int(in_bin.sum()))
        return bin_centers[:len(bin_accuracies)], bin_accuracies, bin_confidences, bin_counts
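
usage sketch with made-up predictions:

calib = UncertaintyCalibration()
calib.add_prediction(0.9, 1)  # confident and correct
calib.add_prediction(0.9, 0)  # confident but wrong
calib.add_prediction(0.6, 1)
calib.add_prediction(0.3, 0)

print(calib.compute_calibration_error(n_bins=5))  # 0.375 on this toy data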

advantages and limitations

advantages

  • expressive power: captures nuanced truth conditions beyond binary true/false
  • natural reasoning: aligns with how humans express and process information
  • uncertainty handling: explicit treatment of confidence and reliability
  • domain flexibility: applicable across diverse reasoning contexts
  • formal foundations: well-established logical and mathematical frameworks

limitations

  • computational complexity: modal reasoning can be pspace-complete or worse
  • interpretation ambiguity: multiple reasonable readings of modal operators
  • calibration challenges: difficult to ensure uncertainty measures are well-calibrated
  • combination problems: no universally agreed way to combine different uncertainty types
  • validation difficulties: hard to verify the correctness of modal and uncertain reasoning

further study

foundational texts

  • hughes & cresswell: “a new introduction to modal logic” (comprehensive modal logic reference)
  • halpern: “reasoning about uncertainty” (probability and modal logic integration)
  • pearl: “probabilistic reasoning in intelligent systems” (bayesian networks and uncertainty)
  • zadeh: “fuzzy sets” (original fuzzy logic paper)

computational approaches

  • fagin & halpern: “reasoning about knowledge and probability” (epistemic probability)
  • bacchus: “representing and reasoning with probabilistic knowledge” (first-order probability)
  • dubois & prade: “possibility theory” (alternative to probability theory)
  • klir & yuan: “fuzzy sets and fuzzy logic” (practical fuzzy reasoning)

applications

  • russell & norvig: “artificial intelligence: a modern approach” (uncertainty in ai)
  • koller & friedman: “probabilistic graphical models” (structured uncertainty representation)
  • van der hoek & meyer: “epistemic logic for ai and computer science” (knowledge representation)
  • mccarthy: “applications of circumscription to formalizing common-sense knowledge” (non-monotonic reasoning)

implementation resources

  • probabilistic programming languages: pyro, edward, tfp
  • modal logic theorem provers: lotrec, tableaux
  • fuzzy logic toolkits: scikit-fuzzy, pyfuzzy
  • bayesian reasoning libraries: pymc3, stan, infer.net