Prompt In Context Learning

Awesome resources for in-context learning and prompt engineering: master LLMs such as ChatGPT, GPT-3, and FlanT5 with up-to-date, cutting-edge content.



An Open-Source Engineering Guide for Prompt-in-context-learning from EgoAlpha Lab.

📝 Papers | ⚡️ Playground | 🛠 Prompt Engineering | 🌍 ChatGPT Prompt | ⛳ LLMs Usage Guide


⭐️ Shining ⭐️: This is a fresh, daily-updated collection of resources for in-context learning and prompt engineering. As Artificial General Intelligence (AGI) approaches, let's take action and become super learners, positioning ourselves at the forefront of this exciting era and striving for personal and professional greatness.

The resources include:

🎉Papers🎉: The latest papers about In-Context Learning, Prompt Engineering, Agent, and Foundation Models.

🎉Playground🎉: Large language models (LLMs) that enable prompt experimentation.

🎉Prompt Engineering🎉: Prompt techniques for leveraging large language models.

🎉ChatGPT Prompt🎉: Prompt examples that can be applied in our work and daily lives.

🎉LLMs Usage Guide🎉: The method for quickly getting started with large language models by using LangChain.
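To make the central idea concrete, here is a minimal, library-free sketch of what in-context learning looks like in practice: worked examples are placed in the prompt so the model infers the task pattern without any fine-tuning. The sentiment-labeling task and its examples are purely hypothetical illustrations.

```python
def build_few_shot_prompt(examples, query):
    """Assemble an in-context learning prompt: worked input/output
    examples followed by the new query, so the model can infer the
    task pattern from context alone."""
    lines = []
    for inp, out in examples:
        lines.append(f"Input: {inp}\nOutput: {out}")
    # The final entry leaves "Output:" open for the model to complete.
    lines.append(f"Input: {query}\nOutput:")
    return "\n\n".join(lines)

# Hypothetical sentiment-labeling task, used only for illustration.
examples = [
    ("The movie was fantastic.", "positive"),
    ("I wasted two hours of my life.", "negative"),
]
prompt = build_few_shot_prompt(examples, "A thoroughly enjoyable read.")
print(prompt)
```

The resulting string would be sent to any LLM as-is; the model's continuation after the trailing "Output:" is its prediction for the new query.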

In the future, there will likely be two types of people on Earth (perhaps even on Mars, but that's a question for Musk):

  • Those who enhance their abilities through the use of AIGC;
  • Those whose jobs are replaced by AI automation.

💎EgoAlpha: Hello! human👤, are you ready?

Table of Contents

📢 News

☄️ EgoAlpha releases TrustGPT, which focuses on reasoning. Trust the GPT with the strongest reasoning abilities for authentic and reliable answers. You can click here or visit the Playgrounds directly to experience it.

👉 Complete history news 👈


📜 Papers

You can click directly on a title to jump to the corresponding PDF.

Survey

The Ethics of ChatGPT in Medicine and Healthcare: A Systematic Review on Large Language Models (LLMs) (2024.03.21)

Parameter-Efficient Fine-Tuning for Large Models: A Comprehensive Survey (2024.03.21)

ChatGPT Alternative Solutions: Large Language Models Survey (2024.03.16)

MM1: Methods, Analysis & Insights from Multimodal LLM Pre-training (2024.03.14)

Large Language Models and Causal Inference in Collaboration: A Comprehensive Survey (2024.03.14)

Model Parallelism on Distributed Infrastructure: A Literature Review from Theory to LLM Case-Studies (2024.03.06)

Benchmarking the Text-to-SQL Capability of Large Language Models: A Comprehensive Evaluation (2024.03.05)

A Comprehensive Survey on Process-Oriented Automatic Text Summarization with Exploration of LLM-Based Methods (2024.03.05)

Large Language Models for Data Annotation: A Survey (2024.02.21)

A Survey on Knowledge Distillation of Large Language Models (2024.02.20)

👉Complete paper list 🔗 for "Survey"👈

Prompt Engineering

Prompt Design

SAMCT: Segment Any CT Allowing Labor-Free Task-Indicator Prompts (2024.03.20)

AFLoRA: Adaptive Freezing of Low Rank Adaptation in Parameter Efficient Fine-Tuning of Large Models (2024.03.20)

Few-Shot Class Incremental Learning with Attention-Aware Self-Adaptive Prompt (2024.03.14)

Unveiling the Generalization Power of Fine-Tuned Large Language Models (2024.03.14)

Attention Prompt Tuning: Parameter-efficient Adaptation of Pre-trained Models for Spatiotemporal Modeling (2024.03.11)

VidProM: A Million-scale Real Prompt-Gallery Dataset for Text-to-Video Diffusion Models (2024.03.10)

Localized Zeroth-Order Prompt Optimization (2024.03.05)

RIFF: Learning to Rephrase Inputs for Few-shot Fine-tuning of Language Models (2024.03.04)

Prompt-Driven Dynamic Object-Centric Learning for Single Domain Generalization (2024.02.28)

Meta-Task Prompting Elicits Embedding from Large Language Models (2024.02.28)

👉Complete paper list 🔗 for "Prompt Design"👈

Chain of Thought

Visual CoT: Unleashing Chain-of-Thought Reasoning in Multi-Modal Language Models (2024.03.25)

A Chain-of-Thought Prompting Approach with LLMs for Evaluating Students' Formative Assessment Responses in Science (2024.03.21)

NavCoT: Boosting LLM-Based Vision-and-Language Navigation via Learning Disentangled Reasoning (2024.03.12)

ERA-CoT: Improving Chain-of-Thought through Entity Relationship Analysis (2024.03.11)

Bias-Augmented Consistency Training Reduces Biased Reasoning in Chain-of-Thought (2024.03.08)

Chain-of-Thought Unfaithfulness as Disguised Accuracy (2024.02.22)

Chain-of-Thought Reasoning Without Prompting (2024.02.15)

Chain-of-Table: Evolving Tables in the Reasoning Chain for Table Understanding (2024.01.09)

A Logically Consistent Chain-of-Thought Approach for Stance Detection (2023.12.26)

Assessing the Impact of Prompting, Persona, and Chain of Thought Methods on ChatGPT's Arithmetic Capabilities (2023.12.22)

👉Complete paper list 🔗 for "Chain of Thought"👈

In-context Learning

AFLoRA: Adaptive Freezing of Low Rank Adaptation in Parameter Efficient Fine-Tuning of Large Models (2024.03.20)

ExploRLLM: Guiding Exploration in Reinforcement Learning with Large Language Models (2024.03.14)

NavCoT: Boosting LLM-Based Vision-and-Language Navigation via Learning Disentangled Reasoning (2024.03.12)

Attention Prompt Tuning: Parameter-efficient Adaptation of Pre-trained Models for Spatiotemporal Modeling (2024.03.11)

Bias-Augmented Consistency Training Reduces Biased Reasoning in Chain-of-Thought (2024.03.08)

LoRA-SP: Streamlined Partial Parameter Adaptation for Resource-Efficient Fine-Tuning of Large Language Models (2024.02.28)

Securing Reliability: A Brief Overview on Enhancing In-Context Learning for Foundation Models (2024.02.27)

GISTEmbed: Guided In-sample Selection of Training Negatives for Text Embedding Fine-tuning (2024.02.26)

DiffuCOMET: Contextual Commonsense Knowledge Diffusion (2024.02.26)

Long-Context Language Modeling with Parallel Context Encoding (2024.02.26)

👉Complete paper list 🔗 for "In-context Learning"👈

Retrieval Augmented Generation

Retrieval-Augmented Generation for AI-Generated Content: A Survey (2024.02.29)

VerifiNER: Verification-augmented NER via Knowledge-grounded Reasoning with Large Language Models (2024.02.28)

LLM Augmented LLMs: Expanding Capabilities through Composition (2024.01.04)

ARES: An Automated Evaluation Framework for Retrieval-Augmented Generation Systems (2023.11.16)

Chain-of-Note: Enhancing Robustness in Retrieval-Augmented Language Models (2023.11.15)

From Classification to Generation: Insights into Crosslingual Retrieval Augmented ICL (2023.11.11)

Optimizing Retrieval-augmented Reader Models via Token Elimination (2023.10.20)

Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection (2023.10.17)

Retrieve Anything To Augment Large Language Models (2023.10.11)

Self-Knowledge Guided Retrieval Augmentation for Large Language Models (2023.10.08)

👉Complete paper list 🔗 for "Retrieval Augmented Generation"👈

Evaluation & Reliability

ERBench: An Entity-Relationship based Automatically Verifiable Hallucination Benchmark for Large Language Models (2024.03.08)

Benchmarking the Text-to-SQL Capability of Large Language Models: A Comprehensive Evaluation (2024.03.05)

Beyond Specialization: Assessing the Capabilities of MLLMs in Age and Gender Estimation (2024.03.04)

A Cognitive Evaluation Benchmark of Image Reasoning and Description for Large Vision Language Models (2024.02.28)

Evaluating Very Long-Term Conversational Memory of LLM Agents (2024.02.27)

Semantic Mirror Jailbreak: Genetic Algorithm Based Jailbreak Prompts Against Open-source LLMs (2024.02.21)

TofuEval: Evaluating Hallucinations of LLMs on Topic-Focused Dialogue Summarization (2024.02.20)

How Well Can LLMs Negotiate? NegotiationArena Platform and Analysis (2024.02.08)

Can Large Language Models Understand Context? (2024.02.01)

Evaluating Large Language Models for Generalization and Robustness via Data Compression (2024.02.01)

👉Complete paper list 🔗 for "Evaluation & Reliability"👈

Agent

Bayesian Methods for Trust in Collaborative Multi-Agent Autonomy (2024.03.25)

AIOS: LLM Agent Operating System (2024.03.25)

ReAct Meets ActRe: Autonomous Annotation of Agent Trajectories for Contrastive Self-Training (2024.03.21)

VideoAgent: Long-form Video Understanding with Large Language Model as Agent (2024.03.15)

SOTOPIA-$\pi$: Interactive Learning of Socially Intelligent Language Agents (2024.03.13)

DeepSafeMPC: Deep Learning-Based Model Predictive Control for Safe Multi-Agent Reinforcement Learning (2024.03.11)

OPEx: A Component-Wise Analysis of LLM-Centric Agents in Embodied Instruction Following (2024.03.05)

Towards General Computer Control: A Multimodal Agent for Red Dead Redemption II as a Case Study (2024.03.05)

Learning to Use Tools via Cooperative and Interactive Agents (2024.03.05)

KnowAgent: Knowledge-Augmented Planning for LLM-Based Agents (2024.03.05)

👉Complete paper list 🔗 for "Agent"👈

Multimodal Prompt

Visual CoT: Unleashing Chain-of-Thought Reasoning in Multi-Modal Language Models (2024.03.25)

Hierarchical Text-to-Vision Self Supervised Alignment for Improved Histopathology Representation Learning (2024.03.21)

MyVLM: Personalizing VLMs for User-Specific Queries (2024.03.21)

MathVerse: Does Your Multi-modal LLM Truly See the Diagrams in Visual Math Problems? (2024.03.21)

PSALM: Pixelwise SegmentAtion with Large Multi-Modal Model (2024.03.21)

SC-Tune: Unleashing Self-Consistent Referential Comprehension in Large Vision Language Models (2024.03.20)

The First to Know: How Token Distributions Reveal Hidden Knowledge in Large Vision-Language Models? (2024.03.14)

3D-VLA: A 3D Vision-Language-Action Generative World Model (2024.03.14)

UniCode: Learning a Unified Codebook for Multimodal Large Language Models (2024.03.14)

DeepSeek-VL: Towards Real-World Vision-Language Understanding (2024.03.08)

👉Complete paper list 🔗 for "Multimodal Prompt"👈

Prompt Application

Comp4D: LLM-Guided Compositional 4D Scene Generation (2024.03.25)

MathVerse: Does Your Multi-modal LLM Truly See the Diagrams in Visual Math Problems? (2024.03.21)

Enhancing Code Generation Performance of Smaller Models by Distilling the Reasoning Ability of LLMs (2024.03.20)

Instruction Multi-Constraint Molecular Generation Using a Teacher-Student Large Language Model (2024.03.20)

Towards Robots That Know When They Need Help: Affordance-Based Uncertainty for Large Language Model Planners (2024.03.19)

ChartInstruct: Instruction Tuning for Chart Comprehension and Reasoning (2024.03.14)

Dynamic Memory Compression: Retrofitting LLMs for Accelerated Inference (2024.03.14)

Towards Proactive Interactions for In-Vehicle Conversational Assistants Utilizing Large Language Models (2024.03.14)

Simple and Scalable Strategies to Continually Pre-train Large Language Models (2024.03.13)

LG-Traj: LLM Guided Pedestrian Trajectory Prediction (2024.03.12)

👉Complete paper list 🔗 for "Prompt Application"👈

Foundation Models

DreamLIP: Language-Image Pre-training with Long Captions (2024.03.25)

Instruction Multi-Constraint Molecular Generation Using a Teacher-Student Large Language Model (2024.03.20)

VideoMamba: State Space Model for Efficient Video Understanding (2024.03.11)

Mamba4Rec: Towards Efficient Sequential Recommendation with Selective State Space Models (2024.03.06)

Griffin: Mixing Gated Linear Recurrences with Local Attention for Efficient Language Models (2024.02.29)

LeMo-NADe: Multi-Parameter Neural Architecture Discovery with LLMs (2024.02.28)

LoRA-SP: Streamlined Partial Parameter Adaptation for Resource-Efficient Fine-Tuning of Large Language Models (2024.02.28)

GISTEmbed: Guided In-sample Selection of Training Negatives for Text Embedding Fine-tuning (2024.02.26)

Set the Clock: Temporal Alignment of Pretrained Language Models (2024.02.26)

Generative Pretrained Hierarchical Transformer for Time Series Forecasting (2024.02.26)

👉Complete paper list 🔗 for "Foundation Models"👈

👨‍💻 LLM Usage

Large language models (LLMs) are becoming a revolutionary technology that is shaping the development of our era. By building on LLMs, developers can create applications that were previously only possible in our imaginations. However, using LLMs often comes with technical barriers, and even at the introductory stage people may be intimidated by the cutting edge. Do any of the following questions sound familiar?

  • How can an LLM be used programmatically?
  • How can it be integrated and deployed in your own programs?

💡 If there were a tutorial accessible to all audiences, not just computer science professionals, it would provide detailed, comprehensive guidance for getting started quickly, ultimately enabling you to use LLMs flexibly and creatively to build the programs you envision. And now, just for you: the most detailed and comprehensive LangChain beginner's guide, sourced from the official LangChain website and further adjusted, accompanied by thoroughly annotated code examples that walk all audiences through the code line by line.

Click 👉here👈 to take a quick tour of getting started with LLM.
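Independently of the guide above, the prompt → model → parser pipeline that frameworks such as LangChain wrap can be sketched in plain Python. The `fake_llm` function below is a hypothetical stub standing in for a real LLM API call; everything else mirrors the general shape of such a chain, not any specific library API.

```python
def format_prompt(template: str, **kwargs) -> str:
    """Step 1: fill a prompt template with named variables."""
    return template.format(**kwargs)

def fake_llm(prompt: str) -> str:
    """Step 2 (stub): a hypothetical model that returns a canned
    completion. In real code this would be an LLM API call."""
    return "  Bonjour le monde.  "

def parse_output(raw: str) -> str:
    """Step 3: post-process the raw completion (here: trim whitespace)."""
    return raw.strip()

def run_chain(template: str, **kwargs) -> str:
    """Compose the three steps, mirroring a prompt | model | parser chain."""
    return parse_output(fake_llm(format_prompt(template, **kwargs)))

template = "Translate the following to French:\n{text}"
result = run_chain(template, text="Hello world.")
print(result)  # Bonjour le monde.
```

Swapping `fake_llm` for a real API client is the only change needed to turn this sketch into a working program, which is exactly the kind of step the tutorial linked above walks through.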

✉️ Contact

This repo is maintained by EgoAlpha Lab. Questions and discussions are welcome via [email protected].

We welcome discussions with friends from academia and industry, and look forward to exploring the latest developments in prompt engineering and in-context learning together.

🙏 Acknowledgements

Thanks to the PhD students from EgoAlpha Lab and the other contributors to this repo. We will continue to improve the project and maintain this community. We also express our sincere gratitude to the authors of the referenced resources; your efforts have broadened our horizons and enabled us to perceive a more wonderful world.

Open Source Agenda is not affiliated with "Prompt In Context Learning" Project. README Source: EgoAlpha/prompt-in-context-learning