reasoning frameworks
definition
an argumentation framework is a formal structure for representing and evaluating arguments and their relationships. introduced by dung (1995), these frameworks model reasoning as a process of constructing, attacking, and defending arguments rather than manipulating logical formulas.
argumentation frameworks abstract away from the internal structure of arguments, focusing instead on the attack and support relationships between them. this abstraction enables systematic analysis of which arguments should be accepted given conflicting evidence and reasoning patterns.
abstract argumentation frameworks
dung frameworks
the basic dung framework consists of:
af = (ar, att)
where:
- ar = set of arguments {a₁, a₂, ..., aₙ}
- att = attack relation ⊆ ar × ar
attack relation: (a, b) ∈ att means argument a attacks argument b
class DungFramework:
    def __init__(self):
        self.arguments = set()
        self.attacks = set()  # set of (attacker, attacked) pairs

    def add_argument(self, arg):
        self.arguments.add(arg)

    def add_attack(self, attacker, attacked):
        self.attacks.add((attacker, attacked))

    def attackers_of(self, arg):
        return {att for att, def_ in self.attacks if def_ == arg}

    def attacked_by(self, arg):
        return {def_ for att, def_ in self.attacks if att == arg}
example framework
arguments: {a, b, c, d}
attacks: {(a,b), (b,c), (c,d), (d,c)}
visualization:
a → b → c ⇄ d
argument a attacks b, b attacks c, and c and d attack each other.
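a minimal sketch of building this example with the DungFramework class above and querying the attack relation (the single-letter argument names are just the ones from the diagram):

af = DungFramework()
for arg in ['a', 'b', 'c', 'd']:
    af.add_argument(arg)
for attacker, attacked in [('a', 'b'), ('b', 'c'), ('c', 'd'), ('d', 'c')]:
    af.add_attack(attacker, attacked)

print(af.attackers_of('c'))  # {'b', 'd'}
print(af.attacked_by('a'))   # {'b'}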
attack semantics
direct attack: argument a directly attacks argument b
climate_change_real attacks climate_change_hoax
defeat: successful attack that renders the target unacceptable
if climate_change_real is accepted, then climate_change_hoax is defeated
defense: argument a defends b if a attacks all attackers of b
peer_review_consensus defends climate_change_real by attacking the arguments that attack climate_change_real
acceptability semantics
conflict-free sets
a set S is conflict-free if no argument in S attacks another argument in S:
∀a,b ∈ S: (a,b) ∉ att
def is_conflict_free(self, arg_set):
    for a in arg_set:
        for b in arg_set:
            if (a, b) in self.attacks:
                return False
    return True
admissible sets
a conflict-free set is admissible if it defends all its members:
∀a ∈ S: ∀b ∈ ar: (b,a) ∈ att → ∃c ∈ S: (c,b) ∈ att
every argument in an admissible set is defended by some argument in the set.
def is_admissible(self, arg_set):
    if not self.is_conflict_free(arg_set):
        return False
    for arg in arg_set:
        attackers = self.attackers_of(arg)
        for attacker in attackers:
            # check whether arg_set counter-attacks this attacker
            defender_exists = any(
                (defender, attacker) in self.attacks
                for defender in arg_set
            )
            if not defender_exists:
                return False
    return True
complete extensions
an admissible set is complete if it contains all arguments it defends:
∀a ∈ ar: (S defends a) → a ∈ S
complete extensions represent coherent positions where you accept everything you can rationally defend.
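this check can be made executable; a minimal sketch, adding hypothetical defends, set_defends, and is_complete helpers to DungFramework (the single-argument defends variant is also used by the dialectical-strength metric later on this page):

def defends(self, defender, arg):
    # true if `defender` attacks every attacker of `arg`
    # (vacuously true when `arg` is unattacked)
    return all((defender, attacker) in self.attacks
               for attacker in self.attackers_of(arg))

def set_defends(self, arg_set, arg):
    # true if every attacker of `arg` is counter-attacked by some member of `arg_set`
    return all(any((c, attacker) in self.attacks for c in arg_set)
               for attacker in self.attackers_of(arg))

def is_complete(self, arg_set):
    # admissible and contains every argument it defends
    if not self.is_admissible(arg_set):
        return False
    return all(arg in arg_set for arg in self.arguments
               if self.set_defends(arg_set, arg))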
grounded extension
the grounded extension is the unique minimal complete extension:
def compute_grounded_extension(self):
    in_args = set()
    out_args = set()
    changed = True
    while changed:
        changed = False
        # accept arguments whose attackers are all rejected (or absent)
        for arg in self.arguments:
            if arg not in in_args and arg not in out_args:
                attackers = self.attackers_of(arg)
                if attackers.issubset(out_args):
                    in_args.add(arg)
                    changed = True
        # reject arguments attacked by an accepted argument
        for arg in self.arguments:
            if arg not in out_args:
                if any((accepted, arg) in self.attacks for accepted in in_args):
                    out_args.add(arg)
                    changed = True
    return in_args
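on the running example (a → b → c ⇄ d), the fixpoint accepts only a: it defeats b, while the mutual attack between c and d leaves both undecided. a quick check:

af = DungFramework()
for arg in ['a', 'b', 'c', 'd']:
    af.add_argument(arg)
for attacker, attacked in [('a', 'b'), ('b', 'c'), ('c', 'd'), ('d', 'c')]:
    af.add_attack(attacker, attacked)

print(af.compute_grounded_extension())  # {'a'}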
preferred extensions
preferred extensions are maximal admissible sets:
def compute_preferred_extensions(self):
    all_admissible = self.find_all_admissible_sets()
    preferred = []
    for candidate in all_admissible:
        is_maximal = True
        for other in all_admissible:
            if candidate < other:  # proper subset
                is_maximal = False
                break
        if is_maximal:
            preferred.append(candidate)
    return preferred
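the method above relies on find_all_admissible_sets, which the snippet leaves undefined; a brute-force sketch (as a DungFramework method) that enumerates the powerset, so it is exponential and only viable for small frameworks:

from itertools import chain, combinations

def find_all_admissible_sets(self):
    # enumerate every subset of arguments and keep the admissible ones
    args = list(self.arguments)
    subsets = chain.from_iterable(
        combinations(args, r) for r in range(len(args) + 1))
    return [set(s) for s in subsets if self.is_admissible(set(s))]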
stable extensions
a conflict-free set S is stable if it attacks every argument not in S:
∀a ∉ S: ∃b ∈ S: (b,a) ∈ att
stable extensions partition the arguments cleanly into accepted and rejected.
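a direct check following the definition, as a hypothetical is_stable method on DungFramework:

def is_stable(self, arg_set):
    # conflict-free, and every outside argument is attacked from inside
    if not self.is_conflict_free(arg_set):
        return False
    return all(any((b, a) in self.attacks for b in arg_set)
               for a in self.arguments - arg_set)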
bipolar frameworks
extend basic frameworks with support relations:
baf = (ar, att, sup)
where:
- ar = arguments
- att = attack relation
- sup = support relation ⊆ ar × ar
support semantics
evidential support: argument provides evidence for another
dna_match supports defendant_guilty
deductive support: argument logically implies another
all_humans_mortal supports socrates_is_mortal
necessary support: argument is required for another’s acceptance
witness_credible supports witness_testimony_valid
mediated attack
support enables indirect attacks through the support relation:
if a supports b, and c attacks a,
then c also undermines b by attacking its support
class BipolarFramework(DungFramework):
    def __init__(self):
        super().__init__()
        self.supports = set()  # (supporter, supported) pairs

    def add_support(self, supporter, supported):
        self.supports.add((supporter, supported))

    def compute_mediated_attacks(self):
        mediated = set()
        for att, target in self.attacks:
            # an attack on a supporter propagates to what it supports
            for supp, supported in self.supports:
                if supp == target:
                    mediated.add((att, supported, 'mediated'))
        return mediated
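a small usage sketch with illustrative argument names: when c attacks a supporter b of a, the attack propagates to a as a mediated attack:

baf = BipolarFramework()
for arg in ['a', 'b', 'c']:
    baf.add_argument(arg)
baf.add_support('b', 'a')  # b supports a
baf.add_attack('c', 'b')   # c attacks the supporter b

print(baf.compute_mediated_attacks())
# {('c', 'a', 'mediated')}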
structured argumentation
aspic+ framework
combines abstract argumentation with logical structure:
class AspicArgument:
    def __init__(self, premises, conclusion, inference_rules):
        self.premises = premises
        self.conclusion = conclusion
        self.rules = inference_rules
        self.subarguments = []

    def construct_argument(self, knowledge_base, rules):
        # build the argument bottom-up from premises via inference rules
        # (left abstract here; depends on the chosen logical language)
        pass

    def find_attack_points(self, other_argument):
        attack_points = []
        # undermining: attack a premise by concluding its negation
        # (string-based negation is a simplification of aspic+ contrariness)
        for premise in other_argument.premises:
            if self.conclusion == f"¬{premise}":
                attack_points.append(('premise', premise))
        # undercutting: attack a defeasible inference rule
        # (attacks_rule is a placeholder for a rule-contrariness test)
        for rule in other_argument.rules:
            if self.attacks_rule(rule):
                attack_points.append(('inference', rule))
        return attack_points
argument schemes
argumentation schemes provide templates for common reasoning patterns:
class ArgumentScheme:
    def __init__(self, name, premises, conclusion, critical_questions):
        self.name = name
        self.premises = premises
        self.conclusion = conclusion
        self.critical_questions = critical_questions

# example: argument from expert opinion
expert_opinion_scheme = ArgumentScheme(
    name="argument_from_expert_opinion",
    premises=[
        "source e is an expert in domain d",
        "e asserts that p in domain d",
        "p is within domain d"
    ],
    conclusion="p is true",
    critical_questions=[
        "is e a genuine expert in d?",
        "did e actually assert p?",
        "is p within e's area of expertise?",
        "is e reliable and trustworthy?",
        "do other experts agree?"
    ]
)
critical questions as attacks
critical questions generate potential attack points:
def generate_attacks_from_scheme(scheme, argument):
    attacks = []
    for question in scheme.critical_questions:
        # if a critical question can be answered negatively, it yields
        # an attack against the scheme-based argument
        # (can_answer_negatively and construct_attack_from_question are
        # domain-specific hooks, not defined here)
        if can_answer_negatively(question, argument):
            attack_arg = construct_attack_from_question(question, argument)
            attacks.append(attack_arg)
    return attacks
probabilistic argumentation
probabilistic attack
attack strength varies by probability:
class ProbabilisticFramework:
    def __init__(self):
        self.arguments = set()
        self.attack_probs = {}  # (attacker, attacked) -> probability

    def add_probabilistic_attack(self, attacker, attacked, probability):
        self.attack_probs[(attacker, attacked)] = probability

    def compute_argument_probability(self, argument):
        # probability the argument survives all attacks on it, assuming
        # independent attacks; note the recursion only terminates on
        # acyclic attack graphs
        survival_prob = 1.0
        for (attacker, attacked), attack_prob in self.attack_probs.items():
            if attacked == argument:
                attacker_prob = self.compute_argument_probability(attacker)
                # probability this attack succeeds
                effective_attack = attack_prob * attacker_prob
                survival_prob *= (1 - effective_attack)
        return survival_prob
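for instance, a single unattacked attacker with attack probability 0.8 leaves its target a survival probability of 0.2 (a minimal sketch; the names and numbers are illustrative):

pf = ProbabilisticFramework()
pf.arguments |= {'a', 'b'}
pf.add_probabilistic_attack('a', 'b', 0.8)

print(pf.compute_argument_probability('a'))  # 1.0 (unattacked)
print(pf.compute_argument_probability('b'))  # 1 - 0.8 * 1.0 ≈ 0.2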
bayesian argumentation
update argument probabilities with new evidence:
class BayesianArgumentation:
    def __init__(self):
        self.argument_priors = {}       # arg -> p(arg)
        self.evidence_likelihoods = {}  # (evidence, arg) -> p(evidence | arg)

    def update_beliefs(self, evidence):
        # bayesian updating of argument probabilities; the marginal must be
        # computed from the priors before any of them are overwritten
        marginal = sum(
            self.argument_priors[arg] *
            self.evidence_likelihoods.get((evidence, arg), 0.5)
            for arg in self.argument_priors
        )
        for arg, prior in list(self.argument_priors.items()):
            likelihood = self.evidence_likelihoods.get((evidence, arg), 0.5)
            self.argument_priors[arg] = (prior * likelihood) / marginal
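a minimal usage sketch, treating two competing arguments as mutually exclusive hypotheses (an assumption this simple update requires; the names and numbers are illustrative):

ba = BayesianArgumentation()
ba.argument_priors = {'guilty': 0.5, 'innocent': 0.5}
ba.evidence_likelihoods = {
    ('dna_match', 'guilty'): 0.9,
    ('dna_match', 'innocent'): 0.3,
}
ba.update_beliefs('dna_match')
print(ba.argument_priors)  # {'guilty': 0.75, 'innocent': 0.25}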
implementation in ai systems
automated debate systems
class DebateSystem:
    def __init__(self):
        self.framework = DungFramework()
        self.argument_pool = ArgumentPool()  # application-specific store

    def conduct_debate(self, topic, participants):
        # debate_finished and determine_winner are application-specific hooks
        debate_state = {
            'current_arguments': set(),
            'turn': 0,
            'participants': participants,
        }
        while not self.debate_finished(debate_state):
            current_player = participants[debate_state['turn'] % len(participants)]
            # player constructs a new argument or an attack on an existing one
            move = current_player.generate_move(
                topic,
                debate_state['current_arguments']
            )
            if move.type == 'argument':
                self.framework.add_argument(move.argument)
                debate_state['current_arguments'].add(move.argument)
            elif move.type == 'attack':
                self.framework.add_attack(move.attacker, move.target)
            debate_state['turn'] += 1
        # evaluate final positions
        extensions = self.framework.compute_preferred_extensions()
        return self.determine_winner(extensions, participants)
argument mining
extract arguments from natural language text:
class ArgumentMiner:
    def __init__(self):
        # claim/premise/attack detectors are placeholders for trained models
        self.claim_classifier = ClaimClassifier()
        self.premise_detector = PremiseDetector()
        self.attack_detector = AttackDetector()

    def mine_arguments(self, text):
        sentences = self.preprocess(text)
        arguments = []
        for sent in sentences:
            if self.claim_classifier.is_claim(sent):
                # find supporting premises and attacking sentences
                premises = self.premise_detector.find_premises(sent, sentences)
                attacks = self.attack_detector.find_attacks(sent, sentences)
                # keep detected attack relations with the argument so that
                # build_framework can turn them into attack edges
                arg = StructuredArgument(premises, sent, attacks)
                arguments.append(arg)
        # construct framework from mined arguments and their attack relations
        framework = self.build_framework(arguments)
        return framework
legal reasoning systems
model legal argumentation:
class LegalArgumentationFramework:
    def __init__(self):
        self.framework = DungFramework()
        self.legal_rules = LegalRuleBase()     # statute/rule base (placeholder)
        self.precedents = PrecedentDatabase()  # case-law store (placeholder)

    def analyze_case(self, case_facts, legal_question):
        # generate arguments from legal rules
        rule_arguments = self.generate_rule_arguments(case_facts)
        # generate arguments from precedents
        precedent_arguments = self.generate_precedent_arguments(case_facts)
        # find attacks based on legal distinctions
        attacks = self.find_legal_attacks(rule_arguments + precedent_arguments)
        # build framework and compute extensions
        for arg in rule_arguments + precedent_arguments:
            self.framework.add_argument(arg)
        for att, def_ in attacks:
            self.framework.add_attack(att, def_)
        extensions = self.framework.compute_preferred_extensions()
        return self.legal_conclusion(extensions, legal_question)
evaluation metrics
extension-based measures
skeptical acceptance: an argument is in all preferred extensions
credulous acceptance: an argument is in some preferred extension
def evaluate_argument_acceptance(framework, argument):
    extensions = framework.compute_preferred_extensions()
    skeptical = all(argument in ext for ext in extensions)
    credulous = any(argument in ext for ext in extensions)
    return {
        'skeptical': skeptical,
        'credulous': credulous,
        'extension_ratio': sum(1 for ext in extensions if argument in ext) / len(extensions)
    }
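on the running example, the preferred extensions are {a, c} and {a, d}, so a is skeptically accepted while c and d are only credulously accepted. a quick check (this assumes the brute-force find_all_admissible_sets sketch from earlier):

af = DungFramework()
for arg in ['a', 'b', 'c', 'd']:
    af.add_argument(arg)
for attacker, attacked in [('a', 'b'), ('b', 'c'), ('c', 'd'), ('d', 'c')]:
    af.add_attack(attacker, attacked)

print(evaluate_argument_acceptance(af, 'a'))
# {'skeptical': True, 'credulous': True, 'extension_ratio': 1.0}
print(evaluate_argument_acceptance(af, 'c'))
# {'skeptical': False, 'credulous': True, 'extension_ratio': 0.5}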
dialectical strength
measure argument’s resilience to attack:
def compute_dialectical_strength(framework, argument):
    attacks_received = len(framework.attackers_of(argument))
    attacks_made = len(framework.attacked_by(argument))
    # arguments defending this argument (defends as sketched earlier:
    # a defender attacks every attacker of the argument)
    defenders = [defender for defender in framework.arguments
                 if framework.defends(defender, argument)]
    # a crude ratio of offensive power plus defense to attacks absorbed
    strength = (attacks_made + len(defenders)) / max(1, attacks_received)
    return strength
convergence analysis
measure stability of argumentation process:
import numpy as np

def analyze_convergence(framework_sequence):
    extension_changes = []
    for i in range(1, len(framework_sequence)):
        prev_extensions = framework_sequence[i-1].compute_preferred_extensions()
        curr_extensions = framework_sequence[i].compute_preferred_extensions()
        # measure the change between successive collections of extensions
        # (one definition of jaccard_distance is sketched below)
        change = jaccard_distance(prev_extensions, curr_extensions)
        extension_changes.append(change)
    return {
        'converged': extension_changes[-1] < 0.01,
        'convergence_rate': np.mean(extension_changes),
        'stability_trend': np.polyfit(range(len(extension_changes)),
                                      extension_changes, 1)[0]
    }
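the snippet leaves jaccard_distance undefined; one reasonable definition compares the two collections as sets of frozensets:

def jaccard_distance(prev_extensions, curr_extensions):
    # 0.0 when the extension collections coincide, 1.0 when disjoint
    prev = {frozenset(e) for e in prev_extensions}
    curr = {frozenset(e) for e in curr_extensions}
    union = prev | curr
    if not union:
        return 0.0
    return 1 - len(prev & curr) / len(union)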
applications
medical decision making
model diagnostic reasoning:
arguments:
- patient_has_symptom_x
- symptom_x_indicates_disease_a
- patient_history_supports_disease_a
- test_result_negative_for_disease_a
- alternative_explanation
attacks:
- test_result_negative_for_disease_a attacks symptom_x_indicates_disease_a
- alternative_explanation attacks symptom_x_indicates_disease_a
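a hypothetical encoding of this example as a dung framework; the grounded extension rejects symptom_x_indicates_disease_a because both of its attackers are themselves unattacked:

af = DungFramework()
for arg in ['patient_has_symptom_x',
            'symptom_x_indicates_disease_a',
            'patient_history_supports_disease_a',
            'test_result_negative_for_disease_a',
            'alternative_explanation']:
    af.add_argument(arg)
af.add_attack('test_result_negative_for_disease_a',
              'symptom_x_indicates_disease_a')
af.add_attack('alternative_explanation',
              'symptom_x_indicates_disease_a')

grounded = af.compute_grounded_extension()
print('symptom_x_indicates_disease_a' in grounded)  # False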
policy debate
analyze competing policy proposals:
arguments:
- policy_a_reduces_costs
- policy_a_improves_outcomes
- policy_b_more_equitable
- policy_b_easier_to_implement
- implementation_difficulties
- equity_concerns
attacks:
- implementation_difficulties attacks policy_a_reduces_costs
- equity_concerns attacks policy_a_improves_outcomes
software requirements
resolve conflicting requirements:
class RequirementsFramework:
    def __init__(self):
        self.framework = DungFramework()
        self.stakeholder_priorities = {}

    def analyze_requirements_conflicts(self, requirements):
        # register all requirements before adding attacks between them
        for req in requirements:
            self.framework.add_argument(req)
        # conflicting requirements attack each other, resolved by priority
        # (conflicts, get_priority and prioritize_requirements are
        # application-specific hooks)
        for req in requirements:
            for other_req in requirements:
                if req is not other_req and self.conflicts(req, other_req):
                    if self.get_priority(req) > self.get_priority(other_req):
                        self.framework.add_attack(req, other_req)
        extensions = self.framework.compute_preferred_extensions()
        return self.prioritize_requirements(extensions)
computational complexity
decision problems
- credulous acceptance: np-complete for preferred semantics
- skeptical acceptance: Π₂ᵖ-complete for preferred semantics
- extension existence: polynomial-time for grounded, np-complete for stable
optimization approaches
approximation algorithms:
import random

def approximate_preferred_extensions(framework, max_iterations=1000):
    # greedy approximation; is_compatible and defense_strength are
    # assumed helpers on the framework (an admissibility-preservation
    # check and a heuristic score, respectively)
    remaining_args = set(framework.arguments)
    extensions = []
    while remaining_args:
        # start from a random seed argument
        seed = random.choice(list(remaining_args))
        extension = {seed}
        # greedily add compatible arguments
        for _ in range(max_iterations):
            candidates = [arg for arg in remaining_args
                          if framework.is_compatible(extension, arg)]
            if not candidates:
                break
            best_candidate = max(candidates,
                                 key=lambda a: framework.defense_strength(a))
            extension.add(best_candidate)
            remaining_args.discard(best_candidate)
        extensions.append(extension)
        remaining_args -= extension
    return extensions
parallel computation:
from multiprocessing import Pool
import itertools
def compute_extensions_parallel(framework, num_processes=4):
# partition argument space
arg_subsets = partition_arguments(framework.arguments, num_processes)
# compute extensions for each partition
with Pool(num_processes) as pool:
partial_extensions = pool.map(
compute_partial_extension,
[(framework, subset) for subset in arg_subsets]
)
# merge results
return merge_extensions(partial_extensions)
advantages and limitations
advantages
- abstraction: separates argument evaluation from argument construction
- generality: applicable across diverse reasoning domains
- formal semantics: precise mathematical foundations
- computational tractability: polynomial algorithms for many problems
- natural modeling: captures intuitive notions of argument and defeat
limitations
- structural abstraction: loses important details about argument content
- computational complexity: many problems are np-hard or worse
- semantic ambiguity: multiple reasonable interpretations of attack
- dynamic challenges: difficult to model evolving argument structures
- preference integration: limited support for stakeholder preferences
extensions and variants
value-based argumentation
incorporate audience values:
class ValueBasedFramework(DungFramework):
    def __init__(self):
        super().__init__()
        self.argument_values = {}  # arg -> set of promoted values
        self.value_orderings = {}  # audience -> value ranking (value -> rank)

    def defeats(self, attacker, attacked, audience):
        # simplified rule: the attack succeeds only if the attacker promotes
        # a value the audience ranks above some value of the attacked argument
        # (bench-capon's original vaf instead lets an attack fail only when
        # the attacked argument's value outranks the attacker's)
        att_values = self.argument_values.get(attacker, set())
        def_values = self.argument_values.get(attacked, set())
        ordering = self.value_orderings.get(audience, {})
        return any(
            ordering.get(av, 0) > ordering.get(dv, 0)
            for av in att_values for dv in def_values
        )
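a usage sketch with illustrative names: for an audience that ranks environment above economy, the economically motivated attack fails under this rule:

vaf = ValueBasedFramework()
vaf.add_argument('build_road')
vaf.add_argument('protect_wetland')
vaf.add_attack('build_road', 'protect_wetland')
vaf.argument_values['build_road'] = {'economy'}
vaf.argument_values['protect_wetland'] = {'environment'}
vaf.value_orderings['green_audience'] = {'environment': 2, 'economy': 1}

print(vaf.defeats('build_road', 'protect_wetland', 'green_audience'))  # False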
temporal argumentation
handle changing arguments over time:
class TemporalFramework:
    def __init__(self, horizon=100):
        # horizon is a simplification introduced here: range() cannot take
        # float('inf'), so open-ended arguments are materialized up to it
        self.horizon = horizon
        self.frameworks = {}          # time -> framework
        self.argument_lifespans = {}  # arg -> (start, end)

    def add_timed_argument(self, argument, start_time, end_time=None):
        self.argument_lifespans[argument] = (start_time, end_time)
        last = end_time if end_time is not None else self.horizon
        for time in range(start_time, last):
            if time not in self.frameworks:
                self.frameworks[time] = DungFramework()
            self.frameworks[time].add_argument(argument)

    def compute_temporal_extensions(self):
        return {time: fw.compute_preferred_extensions()
                for time, fw in self.frameworks.items()}
further study
foundational papers
- dung: “on the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games” (1995)
- caminada & gabbay: “a logical account of formal argumentation” (2009)
- bench-capon & dunne: “argumentation in artificial intelligence” (2007)
computational approaches
- egly, gaggl & woltran: “answer-set programming encodings for argumentation frameworks” (2010)
- charwat et al.: “methods for solving reasoning problems in abstract argumentation” (2015)
- gaggl & woltran: “the argumentation problem in abstract argumentation frameworks” (2013)
applications
- prakken & sartor: “argument-based extended logic programming with defeasible priorities” (1997)
- gordon, prakken & walton: “the carneades model of argument and burden of proof” (2007)
- besnard & hunter: “elements of argumentation” (2008)
implementation frameworks
- dung-o-matic: online argumentation framework solver
- aspic+: structured argumentation framework
- carneades: legal argumentation system
- toast: web-based implementation of the aspic+ framework