
Christian Treude developed and standardized prompt reporting and usage guidelines for empirical software engineering studies involving large language models in the se-ubt/llm-guidelines-website repository. He improved transparency and reproducibility by introducing explicit reporting requirements, expanding practical examples, and refining documentation through incremental, reviewer-driven updates. Using technical writing, BibTeX, and LaTeX, he curated comprehensive bibliographies and maintained rigorous version control to ensure traceability and collaboration. His work established clear standards for reporting prompts and interaction logs, enabling faster onboarding and stronger governance for researchers, and laid a solid foundation for future experiments and peer review.

May 2025 summary for se-ubt/llm-guidelines-website: Delivered LLM Usage Reporting Guidelines Standardization, establishing required reporting standards for prompts and interaction logs across LLM study types to enhance transparency and reproducibility in SE research. Linked to commit 0ed133c3d481d5c57b9f5a4d30a7cfcb6e80776b (Address #72).
Month: 2025-03 — No major bugs fixed; primary focus on documentation improvements and quality standards. Key feature delivered: consolidated LLM Usage Documentation and Guidelines for Empirical Software Engineering Studies in se-ubt/llm-guidelines-website, including prompt reporting guidelines and literature coverage. Iterations driven by reviewer feedback (commits 18a8f8ea7b36977166e2b61c627286e4e08fe0da; 2c31189ea2c3213a992559a8f98502f7dd33d4dc; ae7026d33de5594d7ff1381966375bf8dc94a41e). Overall impact: enhances transparency, reproducibility, and comprehension for empirical SE studies using LLMs as raters, enabling faster onboarding and stronger governance. Technologies/skills: technical writing, evidence synthesis, prompt design, version control, cross-team collaboration.
February 2025: Delivered targeted enhancements to the se-ubt/llm-guidelines-website focusing on prompt reporting in empirical software engineering studies involving LLMs. Implemented explicit MUST/SHOULD reporting requirements, expanded examples, and formatting improvements to enhance clarity and reproducibility. Updated BibTeX bibliography with entries for relevant LLM prompts research (strengthening the literature base and reproducibility), including references such as Liang et al. 2024. Maintained rigorous documentation through incremental commits: initial draft prompts, formatting fixes, added examples, and new references. Result: improved study reproducibility, faster onboarding for researchers, and a solid foundation for future experiments and peer reviews. Technologies/skills: technical writing, BibTeX curation, version control discipline, and empirical-methodology practices for AI in software engineering.
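The bibliography curation described above follows standard BibTeX conventions. A minimal entry skeleton is sketched below; the citation key and all field values are placeholders for illustration only, not the actual entry added to the repository:

```bibtex
% Hypothetical skeleton of a curated entry; key and fields are placeholders.
@inproceedings{author2024topic,
  author    = {Last, First and Other, Author},
  title     = {Title of the Paper on LLM Prompts},
  booktitle = {Proceedings of the Conference},
  year      = {2024},
  pages     = {1--12},
  doi       = {10.0000/placeholder}
}
```

Consistent keys and complete fields (author, title, venue, year, DOI) keep the literature base citable and support the reproducibility goals the guidelines set out.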