How We Built a Customer Feedback Analysis Tool Using Hugging Face Transformers: Real-Time NLP Insights for Retailers Like Tesco



Introduction

Understanding customer feedback is essential for any business that wants to improve its products and services. However, when feedback volumes are high, manually categorizing and analyzing each piece of feedback becomes impractical. That’s where NLP (Natural Language Processing) and machine learning come in.

For this project, we developed a customer feedback analysis tool using Hugging Face Transformers, a leading NLP library that provides access to pre-trained models for various language tasks. This tool is designed to provide real-time insights into customer sentiment, topic categorization, and summary generation for feedback. We envision this tool being especially beneficial for companies like Tesco, which receive large volumes of customer feedback from both online and in-store sources.

In this post, we’ll walk you through our approach, why we chose Hugging Face, the technical challenges we faced, and the results we achieved. If you’re interested in how NLP can enhance your understanding of customer sentiment, keep reading!

Why We Chose Hugging Face for NLP

Hugging Face has become a game-changer in the field of NLP, offering state-of-the-art pre-trained models through the Transformers library. Here’s why Hugging Face was the ideal choice for this project:

  • Access to State-of-the-Art Models: Hugging Face provides models like BERT, RoBERTa, and GPT-2, which are top performers for tasks like sentiment analysis, text classification, and summarization.
  • Easy-to-Use API: With Hugging Face’s transformers library, we can easily load, fine-tune, and deploy models, saving us significant development time.
  • Customization: Although pre-trained models work well out of the box, Hugging Face also supports fine-tuning, allowing us to adapt the models to our specific data and business needs.
  • Community and Resources: Hugging Face has an active community and extensive documentation, making it easy to find support, resources, and even additional datasets for training.

For this project, we specifically chose DistilBERT, a smaller, faster version of BERT that offers nearly the same performance but requires fewer resources. This made it ideal for real-time analysis of customer feedback.

Project Overview: Building a Customer Feedback Analysis Tool

Our tool is designed to automatically analyze and categorize customer feedback in real time, which would be invaluable for a company like Tesco. Here’s how it works (a minimal sketch of the record each piece of feedback becomes follows the list):

  1. Feedback Ingestion: Customer feedback is collected from multiple sources, such as surveys, social media, and customer service interactions.
  2. Preprocessing and Sentiment Analysis: The text data is cleaned and tokenized, and then analyzed for sentiment using Hugging Face’s pre-trained DistilBERT model. This labels each piece of feedback as positive or negative, along with a confidence score.
  3. Topic Classification: We fine-tuned the DistilBERT model to recognize specific topics relevant to Tesco, such as “Product Quality,” “Pricing,” “Customer Service,” and “Delivery.”
  4. Summarization and Insights Generation: For lengthy feedback, we used a pre-trained T5 model from Hugging Face for summarization, allowing Tesco to get quick insights from long responses.
  5. Dashboard for Visualization: The classified and summarized data is sent to a dashboard where the client can view insights, analyze trends, and track sentiment over time.
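
To make the flow concrete, here is a minimal sketch of the record a piece of feedback ends up as once it has passed through the pipeline. The field names are illustrative rather than our exact schema:

from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class AnalyzedFeedback:
    """One piece of customer feedback after it has passed through the pipeline."""
    source: str             # e.g. "survey", "social_media", "customer_service"
    text: str               # the original feedback text
    sentiment: str          # "POSITIVE" or "NEGATIVE" from DistilBERT
    sentiment_score: float  # the model's confidence in that label
    topic: str              # e.g. "Product Quality", "Pricing", "Customer Service", "Delivery"
    summary: Optional[str]  # T5 summary for lengthy feedback, otherwise None
    created_at: datetime    # lets the dashboard track sentiment over time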

Technical Implementation

Here’s a step-by-step look at how we implemented this solution using Hugging Face Transformers.

Step 1: Data Collection and Preprocessing

Customer feedback data was collected from various sources in both structured and unstructured formats. We used Python’s pandas library to clean and preprocess the text data:

  • Tokenization: We tokenized the text using Hugging Face’s AutoTokenizer for DistilBERT, ensuring the text format was compatible with our model.
  • Stop Word Removal and Lemmatization: Although the model handles raw text well, we removed some common stop words and cleaned up unnecessary characters to improve processing efficiency (a minimal preprocessing sketch follows).
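
A minimal version of that preprocessing step might look like the following; the column name, stop word list, and cleaning rules are illustrative rather than our exact production code:

import re
import pandas as pd
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

# Illustrative stop word list; in practice this would be larger or come from a library
STOP_WORDS = {"the", "a", "an", "and", "or", "of"}

def clean(text: str) -> str:
    """Strip stray characters and common stop words."""
    text = re.sub(r"[^\w\s.,!?']", " ", text)
    tokens = [t for t in text.split() if t.lower() not in STOP_WORDS]
    return " ".join(tokens)

# Assumed input: a DataFrame with a 'feedback' column gathered from surveys,
# social media, and customer service logs
df = pd.DataFrame({"feedback": ["The delivery was late & the food was cold!!"]})
df["clean_feedback"] = df["feedback"].apply(clean)

# Tokenize for DistilBERT (padding/truncation keep inputs a consistent length)
encodings = tokenizer(
    df["clean_feedback"].tolist(), padding=True, truncation=True, max_length=256
)
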
Step 2: Sentiment Analysis

To perform sentiment analysis, we used Hugging Face’s pipeline API:

from transformers import pipeline

# Load sentiment-analysis pipeline
sentiment_analyzer = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

# Analyze sentiment of a feedback example
feedback = "The product quality has declined, and customer service was unhelpful."
result = sentiment_analyzer(feedback)
print(result)  # Output: [{"label": "NEGATIVE", "score": 0.99}]

This pipeline allowed us to instantly label feedback as positive or negative, with a confidence score, providing Tesco with a quick overview of customer satisfaction.
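
In production, feedback arrives in volume rather than one string at a time; the same pipeline accepts a list of texts, as in this short sketch (the example strings are made up):

from transformers import pipeline

sentiment_analyzer = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

# Illustrative batch of feedback strings; the pipeline accepts a list directly
feedback_batch = [
    "Great value and really friendly staff at checkout.",
    "My delivery arrived two hours late and no one responded to my emails.",
]
for text, result in zip(feedback_batch, sentiment_analyzer(feedback_batch)):
    print(f"{result['label']} ({result['score']:.2f}): {text}")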

Step 3: Fine-Tuning for Topic Classification

We fine-tuned DistilBERT on a labeled dataset of customer feedback to classify feedback into specific topics. Fine-tuning allowed the model to recognize categories such as "Product Quality," "Customer Service," and "Pricing."

from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments

# Load DistilBERT model for classification (one label per feedback topic)
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=4
)

# Define training arguments and Trainer
# (train_dataset and eval_dataset are assumed to be tokenized datasets of
# labeled feedback prepared during preprocessing)
training_args = TrainingArguments(
    output_dir='./results',
    evaluation_strategy="epoch",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=3,
    weight_decay=0.01,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
)

# Train the model
trainer.train()

After training, the model could accurately classify feedback into categories like “Product Quality,” “Customer Service,” and “Delivery Issues.”
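
For completeness, here is roughly how the fine-tuned checkpoint would be used at inference time. The checkpoint path and the order of the topic labels are illustrative; they assume the trained model was saved with trainer.save_model():

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Illustrative: load the fine-tuned checkpoint saved after training
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("./feedback-topic-model")

# Hypothetical mapping from label index to topic name
topics = ["Product Quality", "Pricing", "Customer Service", "Delivery"]

feedback = "The self-checkout queue was long and no staff came to help."
inputs = tokenizer(feedback, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(topics[logits.argmax(dim=-1).item()])  # e.g. "Customer Service"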

Step 4: Summarization with T5

For longer feedback, we implemented Hugging Face’s T5 model for text summarization. This provided Tesco with concise summaries of lengthy feedback, making it easier to understand the key points quickly.

from transformers import T5Tokenizer, T5ForConditionalGeneration

# Load T5 model and tokenizer for summarization
tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Summarize long feedback
text = "The delivery was late, the packaging was damaged, and the food was cold..."
inputs = tokenizer("summarize: " + text, return_tensors="pt", max_length=512, truncation=True)
summary_ids = model.generate(
    inputs.input_ids,
    max_length=50,
    min_length=25,
    length_penalty=2.0,
    num_beams=4,
    early_stopping=True,
)
summary = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
print(summary)

This provided concise, easy-to-read summaries that highlighted the main issues or praises from each piece of feedback.

Step 5: Building a Dashboard for Visualization

Finally, we sent the categorized and summarized data to a Streamlit dashboard (a rough sketch follows the list below). This allowed Tesco’s customer experience team to:

  • View real-time insights on customer sentiment.
  • Filter feedback by topic or sentiment.
  • Track changes in sentiment over time.
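
As a rough illustration of that dashboard layer, here is a minimal Streamlit sketch. It assumes the analyzed feedback has been exported as a CSV with sentiment, topic, summary, and timestamp columns; the file and column names are ours, not the production schema:

import pandas as pd
import streamlit as st

# Assumed input: analyzed feedback exported by the pipeline (illustrative columns)
df = pd.read_csv("analyzed_feedback.csv", parse_dates=["created_at"])

st.title("Customer Feedback Insights")

# Filter feedback by topic or sentiment
topic = st.sidebar.selectbox("Topic", ["All"] + sorted(df["topic"].unique()))
sentiment = st.sidebar.selectbox("Sentiment", ["All", "POSITIVE", "NEGATIVE"])

filtered = df
if topic != "All":
    filtered = filtered[filtered["topic"] == topic]
if sentiment != "All":
    filtered = filtered[filtered["sentiment"] == sentiment]

# Track changes in sentiment over time (daily share of positive feedback)
daily = (
    filtered.set_index("created_at")
    .resample("D")["sentiment"]
    .apply(lambda s: (s == "POSITIVE").mean())
)
st.line_chart(daily.rename("share of positive feedback"))

# Show the summarized feedback itself
st.dataframe(filtered[["created_at", "topic", "sentiment", "summary"]])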

Challenges and Solutions

Working with NLP models at scale comes with unique challenges. Here’s how we addressed some of the main issues:

Handling Diverse Language and Expressions

Challenge: Customer feedback often contains slang, emojis, and varying tones, making it difficult for models to interpret accurately.

Solution: We included data augmentation techniques to train the model on a variety of language styles, which improved its robustness.
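
The exact augmentation recipe depends on the dataset; as one illustrative sketch (the emoji, slang, and synonym mappings below are invented for the example), simple substitutions can multiply the language styles the model sees during fine-tuning:

import random

# Illustrative mappings; a real project would use larger, curated resources
EMOJI_MAP = {"🙂": "happy", "😡": "angry", "👍": "good"}
SLANG_MAP = {"gr8": "great", "thx": "thanks", "w8": "wait"}
SYNONYMS = {"good": ["decent", "solid"], "late": ["delayed", "overdue"]}

def normalize(text: str) -> str:
    """Replace emojis and common slang with plain words."""
    for emoji, word in EMOJI_MAP.items():
        text = text.replace(emoji, f" {word} ")
    return " ".join(SLANG_MAP.get(tok.lower(), tok) for tok in text.split())

def augment(text: str) -> str:
    """Create a paraphrased variant by swapping in synonyms."""
    tokens = []
    for tok in text.split():
        options = SYNONYMS.get(tok.lower())
        tokens.append(random.choice(options) if options else tok)
    return " ".join(tokens)

feedback = "Delivery was late but the driver was good 🙂"
print(normalize(feedback))
print(augment(normalize(feedback)))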

Fine-Tuning for High Accuracy

Challenge: Pre-trained models are powerful but need customization to achieve high accuracy for specific use cases.

Solution: By fine-tuning DistilBERT on a dataset labeled with Tesco-specific topics, we significantly improved the model’s classification accuracy for our needs.

Managing Model Latency

Challenge: Real-time sentiment analysis requires quick responses, but NLP models can be computationally intensive.

Solution: We containerized the models with Docker and deployed them on AWS Elastic Beanstalk, which let us scale out to handle high request volumes without latency issues.

Results and Business Impact

The Hugging Face-powered feedback analysis tool delivers significant improvements:

  • Enhanced Customer Insights: It gives a retailer like Tesco a clear view of customer sentiment across different feedback channels.
  • Improved Efficiency: The automated system saves hours of manual work by instantly categorizing and summarizing feedback.
  • Scalability: The tool can handle thousands of feedback entries per day, scaling seamlessly as the customer base grows.

Conclusion

Using Hugging Face Transformers, we built a powerful NLP solution that automates customer feedback analysis, providing actionable insights in real time. This project demonstrates the power of NLP in helping businesses like Tesco quickly understand and respond to customer needs.

If you're interested in exploring how Hugging Face and NLP could help your business, reach out to us! We’d be happy to discuss how our expertise can provide valuable insights from your customer data.