
Francesco Calefato enhanced the se-ubt/llm-guidelines-website by developing and refining research documentation for LLM-assisted annotation in software engineering. He expanded literature coverage, clarified evaluation setups, and improved the structure and readability of technical sections using LaTeX and BibTeX. His work included reorganizing content for better onboarding, updating bibliographies, and ensuring accurate symbol rendering in documentation. Francesco applied skills in AI evaluation, academic writing, and natural language processing to deliver research-backed updates, streamline LaTeX source files, and maintain documentation consistency. These contributions improved research traceability, enabled faster decision-making, and supported reproducibility for researchers working with LLM-based annotation workflows.

April 2025 monthly summary for se-ubt/llm-guidelines-website focusing on delivering research-backed updates and maintaining documentation quality. Delivered a new literature bibliography entry on LLMs as annotators in software engineering and cleaned up the LaTeX study-types section to reduce clutter, enabling easier future enhancements and better scholarly alignment.
March 2025: Delivered documentation enhancements to the se-ubt/llm-guidelines-website, focusing on prompt-usage clarity (RFC 2119 MUST/SHOULD keywords), expanded prompt examples, and guidance on tracking prompt evolution and handling sensitive data. The study-types section was refined to better apply LLMs to qualitative data analysis in software engineering research. No major bugs were fixed this period. Impact: clearer guidance, improved reproducibility, safer data handling, and faster onboarding for researchers and engineers. Technologies/skills demonstrated: documentation best practices, RFC 2119 terminology adoption, prompt engineering considerations, data governance, and cross-team collaboration.
February 2025 monthly summary: focused on documentation quality and readability for the se-ubt/llm-guidelines-website. Delivered two targeted changes to the 'LLMs as Annotators' documentation: fixed typos and refactored the Examples section to remove performance numbers, clarifying methodology and benefits. These changes enhanced readability, reduced the risk of misinterpretation, and improved onboarding and trust in the guidelines. Key outcomes included improved maintainer efficiency and consistency across docs, demonstrating discipline in documentation and git-based collaboration.
January 2025 monthly summary for se-ubt/llm-guidelines-website focusing on documenting and evaluating LLM-assisted annotation in software engineering research. Delivered substantial enhancements to the LLM Annotation Research Documentation and Evaluation section, expanding references, evaluation setups, model comparisons, and performance summaries, and improving readability across the literature and related sections. Reorganized content into itemized advantages and challenges, clarified performance claims (including zero-shot considerations for GPT-3.5), and expanded the discussion of limitations and resource needs. Broadened coverage of ChatGPT performance on social computing tasks, reliability in text annotation, and cost/performance trade-offs for GPT-4. Refined the description of the Random Forest verifier within the LLM collaborative annotation framework for clarity. Improved presentation quality by fixing LaTeX symbol rendering (Spearman's rho) so the symbol displays correctly without changing content. Overall, these changes improve onboarding, research traceability, and decision support for researchers evaluating LLM-based annotation workflows.