All authors: Please RSVP if you haven’t. Thanks!
Poster Presentation
- Poster size: 24 x 36 in
- We have two poster sessions, one on each day, from 12:00 PM to 1:30 PM.
- We will have the poster easels and boards set up and ready to use before the workshop starts. When you arrive, feel free to pick an available one and get your poster ready.
- Your poster will be there for the whole workshop, which will give the participants more time to get familiar with your work.
CS students may use the poster printing service by following the instructions on this page.
Lightning Talks
- When: There will be two sessions. Please find your assigned session and order below.
- Length: each talk is three minutes.
- Slides:
- Must be a single PDF file. File name: “{day1 | day2}_order_LastName.pdf”. Examples: day1_02_smith.pdf, day2_13_Ali.pdf.
- By 11:59 PM on Thursday, Oct. 17, upload the file via this link or send an email with the PDF file to LLM_Wor.x9m3w41rnmy6cwco@u.box.com
- Slide contents: you may reuse or re-organize your poster content (if you have one), or create new slides.
- You do NOT have to copy your slides to the presenting PC. Session chairs will load everyone’s slides to the presenting PC ahead of time.
- Before your session, please find your session chair (listed below) and introduce yourself.
- 🥇 Awards. All workshop participants are invited to vote: use the links below to vote for your favorite talks. Across the two sessions, the talks receiving the most votes will receive the Audience’s Choice Awards, which the organizers will hand out at the end of the workshop.
- Have fun!
First lightning-talk session: Oct 19th (tentative: 4:15 PM to 5:15 PM)
Student session chair: Elizabeth Palmieri
Presentation Order | Title |
---|---|
01 | Fast and Accurate Language Model Decoding via Parallel Token Processing |
02 | Interpretable Vision-Language-Action Models via Skill Diffusion Policies (Audience’s 🥇) |
03 | Investigating Correlations Between Computational Mechanisms of LLMs and Their Performance on Linguistic Test Suites |
04 | Assessing Performance and Reliability in Abstractive Text Summarization with LLMs |
05 | Assessing the Impact of Textual Diversity on Large Language Model Reliability |
06 | The Impact of Data Frequency on SAXBPE Tokenizer in Chronos for Time Series Tokenization |
07 | LLMs Meet Palliative Care: Assessing Patient-Provider Communication in Clinics |
08 | Explaining Graph Neural Networks with Large Language Models: A Counterfactual Perspective for Molecular Property Prediction |
09 | Transformers as Interacting Particle Systems: A Statistical Mechanics Perspective |
10 | Drive the image generation: Projected Stable Diffusion |
11 | Maximizing the Capabilities of Tiny Speech Foundation Models in a Privacy Preserving Manner |
12 | Leveraging Librarians’ Expertise: Integrating AI Tools and LLMs |
13 | Sentiment Analysis on Autism Content in College-Level Textbooks |
14 | An Information Theoretic Approach to Operationalize Right to Data Protection |
🥇 Vote for your favorite talks – Session 1 (UVA logins required)
https://forms.office.com/r/kqeFF2M4qY
Second lightning-talk session: Oct 20th (tentative: 2:45 PM to 3:45 PM)
Student session chair: Afsara Benazir (hys4qm)
Presentation Order | Title |
---|---|
01 | Two Tales of Persona in LLMs: Role-Playing and Personalization |
02 | Improving Large Language Model Performance on Aspect Based Text Summarization |
03 | Comparing Learning Paradigms in Large Language Models with Intrinsic Dimension Analysis |
04 | LGSU: A PROACTIVE CONVERSATIONAL AGENT FRAMEWORK FOR MENTAL HEALTH DIFFERENTIAL DIAGNOSIS |
05 | Are Language Models Actually Useful for Time Series Forecasting? |
06 | Observing the Effect of RAG models on Student Learning in Undergraduate Data Science Coursework |
07 | KG-CF: Knowledge Graph Completion with Context Filtering |
08 | Data-adaptive Differentially Private Prompt Synthesis for In-Context Learning |
09 | Integrating LLMs and Time Series Foundation Models for Earthquakes and Hydrology (Audience’s 🥇) |
10 | Constrained Synthesis with Projected Diffusion Models |
11 | Studying the Privacy of LLM Agents |
12 | Low-rank Fine-tuning: A Fairness Perspective |
13 | Speculative Diffusion Decoding: Accelerating Language Generation through Diffusion (Audience’s 🥇) |
14 | Crafting Conversational Companions: Exploring Older Adult’s Perception and Use of LLM-Powered Voice Assistants with Induced Personalities |
🥇 Vote for your favorite talks – Session 2 (UVA logins required)
https://forms.office.com/r/vQGU9SvLBx
Return to the main page.