πŸ€– Automated Diachronic Workflow

One-click automation for your corpus processing!

⚑ Quick Automation Tools

Click any button to automate that part of your workflow!

πŸ“š Step 1: Automated Text Collection

Choose your collection method:

Collection Status:

🎯 Quick Perseus Texts (Fixed URLs)

πŸ“– Direct Links to Popular Texts:
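
As a sketch of how the fixed URLs work: each text is identified by a CTS URN, and both the Scaife reader link and the CTS API request are built from that URN. The URNs below are the same ones used in the automated_workflow.py script at the end of this page:

# perseus_links.py -- sketch: build reader and API links from the fixed CTS URNs
perseus_texts = {
    'iliad': 'urn:cts:greekLit:tlg0012.tlg001.perseus-grc2:1.1',
    'odyssey': 'urn:cts:greekLit:tlg0012.tlg002.perseus-grc2:1.1',
    'nt_matthew': 'urn:cts:greekLit:tlg0031.tlg001.perseus-grc2:1.1',
    'aeneid': 'urn:cts:latinLit:phi0690.phi003.perseus-lat2:1.1',
}

for name, urn in perseus_texts.items():
    print(f"{name}:")
    print(f"  Read online: https://scaife.perseus.org/reader/{urn}")
    print(f"  API request: https://scaife-cts.perseus.org/api/cts?request=GetPassage&urn={urn}")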

πŸ” Step 2: Automated Parsing & Annotation

Paste your text here for instant parsing:

Parsing Results:

# Automated Parsing Pipeline
1. Tokenization β†’ Split into words
2. Morphological analysis β†’ Identify forms
3. Dependency parsing β†’ Syntactic structure
4. Valency detection β†’ Argument patterns
5. Export to CoNLL-U β†’ Standard format
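
A minimal local sketch of steps 1-4 using Stanza (one of the packages listed in the tips at the end of this page). It assumes the Ancient Greek model ('grc') has been downloaded and only approximates valency by counting core arguments per verb:

# parse_pipeline.py -- sketch: tokenization, morphology, and dependency parsing
# with Stanza; assumes `pip install stanza` and the Ancient Greek ('grc') model.
import stanza

stanza.download("grc")  # one-time model download
nlp = stanza.Pipeline(lang="grc", processors="tokenize,pos,lemma,depparse")

def parse(text):
    doc = nlp(text)
    # Print one annotated token per line: id, form, lemma, POS, head, relation
    for sent in doc.sentences:
        for word in sent.words:
            print(word.id, word.text, word.lemma, word.upos,
                  word.head, word.deprel, sep="\t")
    # Rough valency detection: collect core argument relations per verb lemma
    valency = {}
    for sent in doc.sentences:
        for word in sent.words:
            if word.deprel in ("nsubj", "obj", "iobj", "obl") and word.head > 0:
                head = sent.words[word.head - 1]
                if head.upos == "VERB":
                    valency.setdefault(head.lemma, set()).add(word.deprel)
    return valency

print(parse("μῆνιν ἄΡιδΡ θΡὰ ΠηληϊάδΡω Ἀχιλῆος"))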

πŸ€– Step 3: Automated AI Analysis

🎯 Quick AI Tools

πŸ“Š Analysis Options

AI Analysis Results:
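
As one quick AI tool, here is a minimal sketch of masked-word prediction with a Greek BERT model via the Hugging Face transformers pipeline. The model id below is only an example; substitute the Greek BERT checkpoint you actually use:

# ai_analysis.py -- sketch: masked-word prediction with a Greek BERT model.
# Assumes `pip install transformers torch`; the model id is an example only.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="pranaydeeps/Ancient-Greek-BERT")

# Predict a masked word in context, useful for probing usage across periods
masked = f"μῆνιν ἄΡιδΡ θΡὰ {fill_mask.tokenizer.mask_token} Ἀχιλῆος"
for prediction in fill_mask(masked):
    print(prediction["token_str"], round(prediction["score"], 3))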

πŸ“Š Step 4: Automated Export & Share

Export Status:
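
A minimal sketch of the CoNLL-U export named in the parsing pipeline above: one block per sentence, ten tab-separated columns per token, with '_' for empty fields. The token dictionaries here are purely illustrative:

# export_conllu.py -- sketch: write parsed tokens to CoNLL-U format
def export_conllu(sentences, path):
    with open(path, "w", encoding="utf-8") as f:
        for sent_id, tokens in enumerate(sentences, start=1):
            f.write(f"# sent_id = {sent_id}\n")
            for i, tok in enumerate(tokens, start=1):
                cols = [str(i), tok["form"], tok.get("lemma", "_"),
                        tok.get("upos", "_"), "_", "_",
                        str(tok.get("head", 0)), tok.get("deprel", "_"),
                        "_", "_"]
                f.write("\t".join(cols) + "\n")
            f.write("\n")

export_conllu([[{"form": "μῆνιν", "lemma": "μῆνις", "upos": "NOUN",
                 "head": 2, "deprel": "obj"},
                {"form": "ἄΡιδΡ", "lemma": "ἀΡίδω", "upos": "VERB",
                 "head": 0, "deprel": "root"}]],
              "AthensDiaLingCorpus/iliad.conllu")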

πŸš€ Complete Automation

Run the entire workflow with one click!

πŸ“Š Workflow Results

Processing... results will appear here when the workflow completes.
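
To batch the workflow over every fixed text rather than one, a small driver can loop over the URN keys of the DiachronicAutomation class defined in the script below. This assumes the example usage lines at the bottom of that script are removed or guarded before importing:

# run_all.py -- sketch: batch-run the complete workflow for every fixed URN
from automated_workflow import DiachronicAutomation

automation = DiachronicAutomation()
for text_key in automation.perseus_texts:
    automation.run_workflow(text_key)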

🐍 Ready-to-Use Python Scripts

Copy these scripts to automate your workflow locally:

# automated_workflow.py
import urllib.request
import json
from pathlib import Path

class DiachronicAutomation:
    def __init__(self):
        self.corpus_dir = Path("AthensDiaLingCorpus")
        self.corpus_dir.mkdir(exist_ok=True)
        # Fixed Perseus URLs
        self.perseus_texts = {
            'iliad': 'urn:cts:greekLit:tlg0012.tlg001.perseus-grc2:1.1',
            'odyssey': 'urn:cts:greekLit:tlg0012.tlg002.perseus-grc2:1.1',
            'nt_matthew': 'urn:cts:greekLit:tlg0031.tlg001.perseus-grc2:1.1',
            'aeneid': 'urn:cts:latinLit:phi0690.phi003.perseus-lat2:1.1'
        }

    def collect_perseus_text(self, text_key):
        """Auto-collect from Perseus with fixed URLs"""
        if text_key in self.perseus_texts:
            urn = self.perseus_texts[text_key]
            url = f"https://scaife-cts.perseus.org/api/cts?request=GetPassage&urn={urn}"
            try:
                with urllib.request.urlopen(url) as response:
                    data = response.read().decode('utf-8')
                print(f"βœ… Downloaded {text_key}")
                return data
            except Exception as e:
                print(f"❌ Error: {e}")
                # Fallback to web interface
                print(f"Visit: https://scaife.perseus.org/reader/{urn}")
                return None

    def parse_text(self, text):
        """Basic parsing"""
        # Tokenize
        tokens = text.split()
        # Basic word frequency
        freq = {}
        for token in tokens:
            freq[token] = freq.get(token, 0) + 1
        return {"tokens": len(tokens), "unique": len(freq), "frequency": freq}

    def run_workflow(self, text_key):
        print(f"πŸš€ Starting automated workflow for {text_key}")
        # Step 1: Collect
        print("πŸ“š Collecting text...")
        text = self.collect_perseus_text(text_key)
        if text:
            # Step 2: Parse
            print("πŸ” Parsing...")
            parsed = self.parse_text(text)
            # Step 3: Save
            print("πŸ’Ύ Saving results...")
            output_path = self.corpus_dir / f"{text_key}_results.json"
            with open(output_path, 'w', encoding='utf-8') as f:
                json.dump(parsed, f, indent=2, ensure_ascii=False)
            print(f"βœ… Complete! Results saved to {output_path}")
            return parsed
        else:
            print("❌ Collection failed. Please use web interface.")

# Usage
automation = DiachronicAutomation()
automation.run_workflow("iliad")

πŸ’‘ Tips for Local Automation:

  • Install required packages: pip install cltk stanza spacy
  • For Greek BERT: pip install transformers torch
  • Use virtual environment: python -m venv venv
  • Save results in JSON for easy import
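
For the last tip, the JSON written by automated_workflow.py can be re-imported straight into Python, for example:

# load_results.py -- sketch: re-import the JSON produced by automated_workflow.py
import json
from pathlib import Path

results_path = Path("AthensDiaLingCorpus/iliad_results.json")
with open(results_path, encoding="utf-8") as f:
    results = json.load(f)

# Show the ten most frequent tokens
top = sorted(results["frequency"].items(), key=lambda kv: kv[1], reverse=True)[:10]
for token, count in top:
    print(token, count)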