Turning a Joke into Innovation: AI Integration in our Daily Task Manager
Recently, I developed a ticket management system to streamline day-to-day operations at my workplace. The system is as simple as you’d expect: users can create tickets with tentative completion dates, and team members can post updates on those tickets. However, today I’m not here to discuss how I built the system. Instead, I want to highlight how I integrated AI to help users generate updates with prompts or even summarize the updates they wish to post.
It all started as a joke. In today’s world, we rely heavily on AI tools like ChatGPT, BART, and Deep.ai to rephrase our writing and ensure grammatical accuracy. It’s no secret that many of us use grammar tools, whether AI-powered or not. At our workplace, quite a few users do the same. This is nothing to be ashamed of, and no one should be concerned about it. In fact, I believe it’s essential to present ourselves and our ideas clearly and with polish. After all, while English may be our primary workplace language, it’s not our mother tongue.
Now, back to the joke I mentioned earlier. While going through the updates on the daily task manager, one of our EXCOM members jokingly suggested that it would be great to have AI assist with generating daily task updates. Being the self-proclaimed tech enthusiast I am, I confidently told him it would be a piece of cake—and that's exactly how I made it happen in my project. Fortunately for you, I’m going to explain the exact steps and tools I used in this post.
Since this wasn't an official requirement, I had my limitations—there was no budget allocated for the implementation. While there are many ready-to-use, plug-and-play APIs like Deep.ai and OpenAI, they come with costs. To be fair, they aren’t particularly expensive, but their free tiers are quite limited. Hosting your own large language model (LLM) is another option, but that approach requires significant time and resources. Training and fine-tuning the model, coupled with the cost of GPU usage, can lead to unforeseen expenses. Plus, I simply didn’t have the time for that, considering this whole idea started as a joke, driven by my enthusiasm as a self-proclaimed tech tinkerer.
That’s where Hugging Face became the perfect solution. Hugging Face offers an extensive library of pre-trained models accessible via their web API. It’s a treasure trove for developers like me who need AI functionality without the overhead of training models from scratch. Hugging Face provides models for a wide range of tasks, from text generation and summarization to translation, making it incredibly versatile. Since these models are pre-trained, there’s no need to worry about building, training, or maintaining them. You simply make API calls, and the models do the work.
For my use case, integrating Hugging Face was ideal. Since my project is built with VueJS, connecting to Hugging Face’s API was as simple as making a few API calls. The setup is quick and lightweight, with no need for heavy infrastructure or custom training. Hugging Face allowed me to implement the AI functionality I needed, providing users with the ability to generate or summarize their ticket updates effortlessly—all without the overhead of managing a custom model or paying for more expensive options like OpenAI. In short, Hugging Face provided the perfect balance between flexibility, cost, and ease of use, making it an excellent candidate for my project.
Let's delve into the exciting part: coding the integration of Hugging Face's API into my project. To seamlessly connect to the Hugging Face Web API, I needed a reliable method for handling HTTP requests. Since I was already utilizing `axios` for managing REST API interactions within my application, I decided to continue using it for consistency and efficiency. The next crucial step was obtaining an API key from Hugging Face, which is straightforward. Additionally, selecting the right AI model was essential for achieving the desired functionality. After careful consideration, I chose `facebook/bart-large-cnn` for its exceptional capabilities in text summarization.
Steps to Get Your Hugging Face API Key
- Sign Up on Hugging Face: If you don't already have an account, visit the Hugging Face website and sign up for a free account.
- Navigate to Settings: Once logged in, click on your profile icon located at the top right corner of the page and select "Settings" from the dropdown menu.
- Access Your API Key: In the Settings menu, find the "Access Tokens" section. Here, you can generate a new API key by clicking the "New token" button. Name your token appropriately and copy the generated key. Keep this key secure, as it will be used to authenticate your API requests.
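With a Vite-based project like mine, the key then goes into an environment file rather than the source. A minimal sketch, assuming standard Vite conventions (only variables prefixed with `VITE_` are exposed to client code, and `.env.local` is gitignored by default):

```
# .env.local (keep this file out of version control)
VITE_HUGGINGFACE_API_KEY=hf_your_token_here
```

One caveat worth stating plainly: anything exposed through `import.meta.env` ends up in the shipped client bundle, so for a public-facing app the call would ideally be proxied through a small backend instead.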
Why `facebook/bart-large-cnn` is a Perfect Choice
- Optimized for Summarization: The `facebook/bart-large-cnn` model is specifically fine-tuned for text summarization tasks. This makes it ideal for generating concise updates or summarizing ticket information, which aligns perfectly with my application's needs.
- Pre-trained and Ready to Use: This model comes pre-trained on extensive datasets, eliminating the need for me to invest time and resources into training a custom model. Hugging Face hosts this model, allowing for immediate integration through their API.
- Speed and Efficiency: Despite being a large model, `bart-large-cnn` benefits from Hugging Face’s robust infrastructure, ensuring that API requests are processed quickly. This efficiency is crucial for real-time applications like my ticket management system, where timely responses enhance user experience.
- Accuracy and Flexibility: The model delivers accurate summaries while maintaining the flexibility to handle diverse types of content. This adaptability is essential for managing the varied updates that users may post, ensuring consistent and reliable output.
Step-by-Step Code Integration and Detailed Explanation
Now, let's explore the code I implemented to integrate the Hugging Face API into my Vue project. I'll break down each part of the function to explain its purpose and the reasoning behind the chosen parameters.
```typescript
import axios from "axios";

export async function rephraseTextAI(text: string): Promise<string> {
  const API_URL = "https://api-inference.huggingface.co/models/facebook/bart-large-cnn";
  const API_KEY = import.meta.env.VITE_HUGGINGFACE_API_KEY as string;

  try {
    const response = await axios.post(
      API_URL,
      {
        inputs: text, // The input text that needs to be summarized or rephrased.
        parameters: {
          temperature: 0.7, // Controls the randomness of the output. Lower values make the output more deterministic.
          num_return_sequences: 1, // Specifies the number of summary options to generate. Here, only one summary is needed.
        },
        options: { wait_for_model: true }, // Ensures the request waits until the model is loaded and ready to process.
      },
      { headers: { Authorization: `Bearer ${API_KEY}` } }, // Authenticates the request using the API key.
    );

    // Check if the response status is successful.
    if (response.status === 200) {
      const rephrasedText = response.data[0]?.summary_text; // Extracts the summarized text from the response.
      if (rephrasedText) {
        return rephrasedText; // Returns the summarized or rephrased text to the caller.
      } else {
        return Promise.reject(new Error("No rephrased text returned from API.")); // Handles cases where the API doesn't return the expected data.
      }
    } else {
      return Promise.reject(new Error(`Unexpected response status: ${response.status}`)); // Handles unexpected HTTP status codes.
    }
  } catch (error: any) {
    console.error("Error fetching from Hugging Face API:", error); // Logs any errors that occur during the API call.
    return Promise.reject(new Error(error.message || "Unknown error occurred while rephrasing text.")); // Propagates a meaningful error message.
  }
}
```
Detailed Explanation of the Code
Importing Axios:

```typescript
import axios from "axios";
```

- Purpose: Axios is a promise-based HTTP client that facilitates making API requests. Since I was already using it for other REST API interactions in my application, it made sense to utilize it here for consistency.

Function Definition:

```typescript
export async function rephraseTextAI(text: string): Promise<string> {
```

- Purpose: This asynchronous function takes a string input (`text`) and returns a promise that resolves to a rephrased or summarized string. Using TypeScript ensures type safety and better code maintainability.
API Endpoint and Key:

```typescript
const API_URL = "https://api-inference.huggingface.co/models/facebook/bart-large-cnn";
const API_KEY = import.meta.env.VITE_HUGGINGFACE_API_KEY as string;
```

- API_URL: Specifies the endpoint for the `facebook/bart-large-cnn` model hosted on Hugging Face.
- API_KEY: Retrieves the API key from environment variables (`import.meta.env`). Storing the key in environment variables enhances security by keeping sensitive information out of the codebase.
Making the API Request:

```typescript
const response = await axios.post(
  API_URL,
  {
    inputs: text,
    parameters: {
      temperature: 0.7,
      num_return_sequences: 1,
    },
    options: { wait_for_model: true },
  },
  { headers: { Authorization: `Bearer ${API_KEY}` } },
);
```

- Payload:
  - `inputs`: The text to be summarized or rephrased.
  - `parameters`:
    - `temperature`: Set to `0.7`, this parameter controls the randomness of the model's output. A lower temperature (closer to 0) makes the output more deterministic and focused, while a higher temperature increases creativity and variability. I chose `0.7` to balance creativity with coherence.
    - `num_return_sequences`: Set to `1`, this specifies that only one summary should be generated. Since the use case requires a single, concise update, generating multiple summaries was unnecessary.
  - `options`:
    - `wait_for_model`: Set to `true`, this option ensures that the request waits until the model is fully loaded and ready to process the input. This is particularly useful if the model is not already active, preventing failed or incomplete requests.
- Headers:
  - `Authorization`: Uses the Bearer token (`API_KEY`) to authenticate the request, ensuring that only authorized users can access the API.
Handling the Response:

```typescript
if (response.status === 200) {
  const rephrasedText = response.data[0]?.summary_text;
  if (rephrasedText) {
    return rephrasedText;
  } else {
    return Promise.reject(new Error("No rephrased text returned from API."));
  }
} else {
  return Promise.reject(new Error(`Unexpected response status: ${response.status}`));
}
```

- Success Check: Verifies that the HTTP status code is `200`, indicating a successful request.
- Extracting Summary: Retrieves the `summary_text` from the first element of the response data array. This is the summarized version of the input text.
- Error Handling:
  - If `summary_text` is not present, it rejects the promise with a meaningful error message.
  - If the status code is not `200`, it rejects the promise with an error indicating an unexpected response status.
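As a side note, those shape checks can be factored into a small standalone helper. The sketch below is my own (the name `extractSummary` is hypothetical, not part of the project code); it throws instead of returning a rejected promise, which an async caller converts into a rejection anyway:

```typescript
// Hypothetical helper mirroring the response-shape checks:
// the response data should be a non-empty array whose first
// element carries a non-empty summary_text string.
export function extractSummary(data: unknown): string {
  if (!Array.isArray(data) || data.length === 0) {
    throw new Error("No rephrased text returned from API.");
  }
  const summary = (data[0] as { summary_text?: unknown }).summary_text;
  if (typeof summary !== "string" || summary.length === 0) {
    throw new Error("No rephrased text returned from API.");
  }
  return summary;
}
```

Keeping the validation in one place makes it trivial to unit-test without ever hitting the network.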
Catch Block for Errors:

```typescript
} catch (error: any) {
  console.error("Error fetching from Hugging Face API:", error);
  return Promise.reject(new Error(error.message || "Unknown error occurred while rephrasing text."));
}
```

- Purpose: Catches any errors that occur during the API call, such as network issues or unexpected input.
- Logging: Logs the error to the console for debugging purposes.
- Error Propagation: Rejects the promise with a descriptive error message, ensuring that the calling function is aware of what went wrong.
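One subtlety in that catch block: `error.message` can be empty, or the thrown value may not be an `Error` at all, which is exactly why the fallback string matters. That normalization could be isolated into a tiny helper, sketched here under an assumed name (`apiErrorMessage` is mine, not from the project):

```typescript
// Hypothetical helper: normalize anything thrown during the API call
// into a human-readable message, falling back to a generic one when
// the thrown value is not an Error or has no message.
export function apiErrorMessage(error: unknown): string {
  if (error instanceof Error && error.message) {
    return error.message;
  }
  return "Unknown error occurred while rephrasing text.";
}
```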
Additional Parameters and Customization
While the current implementation uses specific parameters like `temperature` and `num_return_sequences`, Hugging Face's API offers a variety of other parameters that can be adjusted to fine-tune the results:
- Max Length (`max_length`): Specifies the maximum length of the generated summary. This can help control the verbosity of the output.
- Min Length (`min_length`): Ensures that the summary is at least a certain number of tokens, preventing overly terse outputs.
- Top-K Sampling (`top_k`): Limits the number of highest-probability vocabulary tokens to keep for top-k sampling, adding another layer of control over the randomness.
- Top-P Sampling (`top_p`): Implements nucleus sampling by considering the smallest set of tokens whose cumulative probability exceeds `top_p`.

These parameters allow for extensive customization based on the end user's needs. For instance, adjusting `temperature` and `top_p` can make the summaries more creative or more focused, depending on the desired outcome. However, in this project, I kept the parameters simple to ensure reliability and consistency for the end users.
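If you do want to experiment with these knobs, one option is to pull the payload construction into a helper with overridable defaults. The sketch below is hypothetical (`buildSummarizationPayload` is my own name, and which parameters the hosted summarization pipeline actually honors should be checked against the Inference API docs); the defaults mirror the values used earlier:

```typescript
// Hypothetical helper: defaults match the request shown earlier,
// with optional overrides for the tuning parameters discussed above.
interface SummarizationParams {
  temperature?: number;
  num_return_sequences?: number;
  max_length?: number;
  min_length?: number;
  top_k?: number;
  top_p?: number;
}

export function buildSummarizationPayload(
  text: string,
  overrides: SummarizationParams = {},
) {
  return {
    inputs: text,
    // Overrides are spread last, so any key supplied by the
    // caller replaces the corresponding default.
    parameters: { temperature: 0.7, num_return_sequences: 1, ...overrides },
    options: { wait_for_model: true },
  };
}
```

Calling `buildSummarizationPayload(text, { max_length: 80 })` would then cap the summary length while keeping everything else at the defaults.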
Final Integration Steps
- API Request Setup: Using `axios`, I configured the POST request with the necessary headers and payload, including the input text and desired parameters for our project.
- Model Selection: Specified `facebook/bart-large-cnn` as the model to leverage its summarization capabilities.
- Processing User Input: When users input their updates, the application sends this text to the Hugging Face API through the `rephraseTextAI` function.
- Displaying Results: The API's response, which contains the summarized or rephrased text, is then displayed back to the user within the ticket manager interface.
Now that everything is set up, let’s take a look at the magic once it’s in action. Here’s a screen capture showcasing two scenarios:
- The user provides a prompt, and the AI generates an update based on that input.
- The user writes their own update, and the AI summarizes the provided text.
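The post doesn't show the plumbing behind those two scenarios, but conceptually both can feed the same summarization call; only the framing of the input text differs. A sketch of that routing, where `buildInputText` and the instruction wording are my own assumptions rather than code from the project:

```typescript
// Hypothetical: frame the user's text differently per scenario
// before sending it to the same rephrase/summarize endpoint.
export type UpdateMode = "generate" | "summarize";

export function buildInputText(mode: UpdateMode, userText: string): string {
  if (mode === "generate") {
    // Assumed prompt framing for scenario 1; tune the wording to taste.
    return `Write a short status update about: ${userText}`;
  }
  // Scenario 2: the user's own draft is passed through for summarization.
  return userText;
}
```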
Conclusion
In conclusion, we've successfully integrated AI into our ticket management system to enhance user experience. By using the Hugging Face API and the `facebook/bart-large-cnn` model, we've made it easier for users to generate updates and summarize their contributions.
We started by obtaining an API key from Hugging Face and then set up a function to handle the interaction with the API. The use of `axios` for HTTP requests helped maintain consistency in our codebase. Through our careful selection of parameters, like `temperature` and the number of return sequences, we ensured that the AI provides relevant and concise outputs tailored to user needs.
This integration not only simplifies the process of writing updates but also enhances users' ability to communicate effectively. With AI taking care of these tasks, our team can concentrate more on critical work while still ensuring that our communications remain clear and coherent. Ultimately, this project showcases the potential of AI as a valuable tool in daily operations, streamlining our workflows and boosting efficiency—even if it all began as a joke. I hope this post encourages you to consider similar solutions in your own projects! Until next time, ciao!