
Andreas Hochmuth developed AI-assisted grading and feedback features for the umgc/2025_fall repository, focusing on scalable rubric-driven assessment and export reliability. He integrated dynamic rubric handling and a prompt service for customizable feedback, using Dart and Flutter for both backend and UI development. His work included dynamic LLM selection logic that supports multiple models with prioritized API key discovery and fallback mechanisms, as well as refactored prompt engineering that separates objective grading from subjective feedback and enforces strict JSON output. Andreas also enhanced the submission detail UI, enabling configurable grading settings and restoring reliable PDF/Excel exports, resulting in a robust, user-focused grading and reporting workflow.

October 2025 monthly summary for umgc/2025_fall: Focused on delivering AI-assisted grading and feedback improvements with three main contributions.

LLM Selection Logic Improvements: introduced dynamic API key discovery and prioritized model selection (ChatGPT, Grok, Deepseek, Perplexity) with a safe fallback to ChatGPT; simplified API key checks. Commits: a68bc753a90e06dda17bb28e2c8d6589be158459; 5033f0bb45b1f27675d4c5e62049214f4571dec9.

Grading and Feedback Prompt Improvements: refactored the LLM prompting to separate objective grading from subjective feedback, enforced objective grading unaffected by tone/detail settings, and standardized output with strict JSON formatting; enhanced the rubric prompt with explicit tone and level-of-detail definitions. Commits: 957389ba3aa105e76c23a1601556b8589c9da234; f2e2dde62decd4b7cef3c8ea56c2990062e0e05a.

Submission Detail UI Enhancements for AI Grading and Feedback: added UI elements to the submission detail view to display AI grading settings, an information bar, and the grade percentage, and to allow grade-level selection for feedback generation, ensuring grade-related data is visible and customizable. Commits: 01d01d8504cb77d2fbc3a3346e6419769b6bcfa5; 3b6bf68c2587088ab3e5d6a21799284f4885c4ec; fbac1ff6fb5b49db521af0fc4d504ffbaa64fa4b.
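The prioritized model selection with fallback described above can be sketched as follows. This is a minimal illustrative sketch in Python (the repository itself is Dart/Flutter); the model names come from the summary, but the environment-variable names and function names are assumptions, not the project's actual identifiers.

```python
import os

# Hypothetical priority order and key names; only the model list
# (ChatGPT, Grok, Deepseek, Perplexity) comes from the summary above.
PRIORITY = [
    ("ChatGPT", "OPENAI_API_KEY"),
    ("Grok", "XAI_API_KEY"),
    ("Deepseek", "DEEPSEEK_API_KEY"),
    ("Perplexity", "PERPLEXITY_API_KEY"),
]

def select_model(env=None):
    """Return the first model whose API key is configured.

    Falls back to ChatGPT when no key is found, mirroring the
    'safe fallback' behavior described in the summary.
    """
    env = os.environ if env is None else env
    for model, key_var in PRIORITY:
        if env.get(key_var):
            return model
    return "ChatGPT"  # safe fallback
```

Discovering keys in a fixed priority order keeps the check simple: the first configured key wins, and the fallback guarantees a usable default even with an empty environment.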
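The strict-JSON output contract that separates objective grading from subjective feedback could be enforced on the consuming side roughly like this. Again a Python sketch of the idea only: the key names (`criterion_scores`, `total_percentage`, `feedback`) and the function are hypothetical, not the repository's actual schema.

```python
import json

# Hypothetical schema: objective fields (scores, percentage) are kept
# separate from the subjective 'feedback' text, so tone/detail settings
# can influence feedback wording but never the grade itself.
REQUIRED_KEYS = {"criterion_scores", "total_percentage", "feedback"}

def parse_grading_response(raw: str) -> dict:
    """Parse and validate a strict-JSON grading response from the LLM."""
    data = json.loads(raw)  # raises ValueError on malformed JSON
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"grading response missing keys: {sorted(missing)}")
    if not 0 <= data["total_percentage"] <= 100:
        raise ValueError("total_percentage out of range")
    return data
```

Validating the response immediately after the call is what makes a "strict JSON" prompt useful in practice: a model reply that drifts from the schema fails fast instead of silently corrupting grades.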
For 2025-09, focused on AI-assisted grading enhancements and export reliability in the umgc/2025_fall repo. Delivered AI Grading with Dynamic Rubrics and a Prompt Service, refactored the essay editing flow to handle dynamic rubric data, and integrated AI-powered grading into the submission view. Restored robust PDF/Excel export downloads and added configurable controls over LLM output to improve feedback quality and compliance. The work positions the platform to scale rubric-driven assessment and deliver consistent, exportable reports for educators and students.
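The dynamic-rubric grading mentioned above amounts to combining per-criterion scores against a rubric definition. A minimal sketch of that shape, in Python rather than the project's Dart, with hypothetical names throughout:

```python
from dataclasses import dataclass

@dataclass
class Criterion:
    """One rubric criterion; the field names are illustrative assumptions."""
    name: str
    max_points: int

def grade_percentage(rubric, scores):
    """Combine per-criterion scores into an overall percentage.

    Scores are clamped to each criterion's maximum so a malformed
    input can never yield more than 100%.
    """
    total = sum(c.max_points for c in rubric)
    earned = sum(min(scores.get(c.name, 0), c.max_points) for c in rubric)
    return round(100 * earned / total, 1) if total else 0.0
```

Because the rubric is plain data rather than hard-coded criteria, swapping in a different assignment's rubric changes the grading behavior without touching the scoring logic, which is the point of "dynamic rubric handling."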