Step 3 — Streamlit UI + First Vertex AI Call (Chatbot Course)


Create a simple chat interface in Streamlit and fetch your first response from Gemini on Vertex AI.

Prerequisites (from Steps 1–2):
  • GCP project created, billing enabled, Vertex AI API enabled.
  • Local auth done via gcloud auth application-default login (ADC) or you have a service-account JSON key.
  • PyCharm installed (or any editor) and Python 3.10+ available.
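
To double-check these from a terminal, the gcloud CLI can confirm your setup (the first command prints the active project ID; the second prints an access token only if ADC is configured):

gcloud config get-value project
gcloud auth application-default print-access-token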

1) Create Project Folder & Virtual Environment

Windows (PowerShell / CMD)

mkdir C:\VertexChatbot
cd C:\VertexChatbot
py -m venv .venv
.venv\Scripts\activate
python -m pip install --upgrade pip
pip install streamlit google-cloud-aiplatform python-dotenv

macOS / Linux (Terminal)

mkdir -p ~/VertexChatbot
cd ~/VertexChatbot
python3 -m venv .venv
source .venv/bin/activate
python -m pip install --upgrade pip
pip install streamlit google-cloud-aiplatform python-dotenv
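
To confirm the environment is wired up (optional), run a quick import check; note that the vertexai module is provided by the google-cloud-aiplatform package:

python -c "import streamlit, vertexai; print('imports OK')"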

Create a requirements.txt (helps later for deployment):

streamlit
google-cloud-aiplatform
python-dotenv

If you use PyCharm: File ▸ New Project → point to this folder → choose Existing Virtualenv (.venv) so PyCharm uses the same environment.

2) (Optional) Environment File for Config

Create a .env at project root to avoid hardcoding values:

GCP_PROJECT_ID=YOUR_PROJECT_ID
GCP_LOCATION=us-central1
# Optional if you prefer key-based auth (for CI/servers; ADC recommended locally)
# GOOGLE_APPLICATION_CREDENTIALS=C:/VertexChatbot/keys/sa.json

What this does: Lets your code read settings from environment variables. ADC will use your logged-in account; the key path is only needed if you choose key-based auth.
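
As a quick illustration of how load_dotenv works (check_env.py is a throwaway name, not part of the app):

# check_env.py  (throwaway script to confirm .env is being read)
import os
from dotenv import load_dotenv

load_dotenv()                        # copies entries from .env into the process environment
print(os.getenv("GCP_PROJECT_ID"))   # should print YOUR_PROJECT_ID
print(os.getenv("GCP_LOCATION"))     # should print us-central1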

3) Create app.py — Minimal Chatbot UI + Vertex AI Call

Save the following as app.py in your project folder. Every important line is commented.

# app.py
# Streamlit UI + Vertex AI (Gemini) first call.
# -------------------------------------------------
# This app:
# 1) Sets up a chat-style UI in Streamlit.
# 2) Initializes Vertex AI with your project/region.
# 3) Sends the user's prompt to Gemini and shows the response.

import os
import streamlit as st
from dotenv import load_dotenv

import vertexai
from vertexai.generative_models import GenerativeModel  # Gemini client

# ---- Page & session setup (Streamlit UI) ----
st.set_page_config(page_title="Vertex AI Chatbot", page_icon="🤖", layout="centered")
st.title("🤖 Vertex AI Chatbot (Step 3)")

# Keep chat history in session so it persists across reruns
if "messages" not in st.session_state:
    st.session_state.messages = []  # list of tuples: (role, content)

# ---- Load environment variables (.env) ----
load_dotenv()  # reads GCP_PROJECT_ID, GCP_LOCATION, etc.

PROJECT_ID = os.getenv("GCP_PROJECT_ID", "YOUR_PROJECT_ID")
LOCATION = os.getenv("GCP_LOCATION", "us-central1")

# ---- Initialize Vertex AI client ----
# Uses Application Default Credentials (ADC) set by 'gcloud auth application-default login'
vertexai.init(project=PROJECT_ID, location=LOCATION)

# ---- Display existing messages (chat history) ----
for role, content in st.session_state.messages:
    with st.chat_message("user" if role == "user" else "assistant"):
        st.markdown(content)

# ---- Chat input (bottom text box) ----
prompt = st.chat_input("Type your question...")

# ---- Handle new user message ----
if prompt:
    # 1) Save & show the user's message
    st.session_state.messages.append(("user", prompt))
    with st.chat_message("user"):
        st.markdown(prompt)

    # 2) Call Vertex AI (Gemini) to get a response
    try:
        model = GenerativeModel("gemini-1.5-pro")  # prebuilt LLM
        with st.chat_message("assistant"):
            with st.spinner("Thinking with Gemini..."):
                response = model.generate_content(prompt)
                answer = response.text
                st.markdown(answer)
        # 3) Store assistant reply in history
        st.session_state.messages.append(("assistant", answer))
    except Exception as e:
        # Show error nicely in the UI
        with st.chat_message("assistant"):
            st.error(f"Vertex AI error: {e}")

[Screenshot placeholder: PyCharm project tree showing app.py, requirements.txt, .env]
Why this structure? app.py is your main app. .env holds configs. requirements.txt pins dependencies for deployment.
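
For reference, the project tree at this point:

VertexChatbot/
├── .venv/              (virtual environment; not committed)
├── .env                (project/region config, optional)
├── app.py              (the Streamlit app)
└── requirements.txt    (dependencies for deployment)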

4) Run the App Locally

From your activated virtualenv terminal inside the project folder:

streamlit run app.py

Your browser will open at http://localhost:8501. If the port is busy, pick another:

streamlit run app.py --server.port 8502

Expected terminal output snippet:

  You can now view your Streamlit app in your browser.

  Local URL: http://localhost:8501
  Network URL: http://192.168.x.x:8501

[Screenshot placeholder: Streamlit page showing title, chat input, and a model response]

5) Try a Prompt

Type something like:

Summarize the difference between lists, tuples, and sets in Python.

You should see a helpful response from Gemini. The message is also saved to chat history.

If you see a response, congrats — your local UI ↔ Vertex AI integration works!

Troubleshooting

  • Permission or 401/403 errors: Re-run gcloud auth application-default login. Ensure the same Google account has access to your project.
  • Wrong project: In terminal, run gcloud config list. Update GCP_PROJECT_ID in .env to match (see the ADC check after this list).
  • API not enabled: In GCP Console → APIs & Services ▸ Enabled APIs → ensure Vertex AI API is enabled.
  • Region mismatch: Keep GCP_LOCATION=us-central1 unless you know your model is available in another region.
  • Port already in use: streamlit run app.py --server.port 8502.
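
If auth problems persist, this minimal check prints the project that ADC actually resolves to (google.auth ships with google-cloud-aiplatform; the filename is just a suggestion):

# check_auth.py  (prints the project behind Application Default Credentials)
import google.auth

credentials, project = google.auth.default()
print("ADC project:", project)   # should match GCP_PROJECT_ID in your .env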

Optional: OpenAI Fallback (if you prefer)

If you want to test with OpenAI instead, install the SDK and set your API key:

pip install openai
# In your terminal (temporary) or .env (persistent)
# OPENAI_API_KEY=sk-...
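
To set the key for the current shell session only (standard shell built-ins; use .env if you want it to persist):

macOS / Linux:       export OPENAI_API_KEY=sk-...
Windows CMD:         set OPENAI_API_KEY=sk-...
Windows PowerShell:  $env:OPENAI_API_KEY="sk-..."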

Minimal replacement code (inside the try: block):

# from openai import OpenAI
# client = OpenAI()
# completion = client.chat.completions.create(
#     model="gpt-4o-mini",
#     messages=[{"role":"user","content": prompt}]
# )
# answer = completion.choices[0].message.content

Stick to Vertex AI for a fully Google-native stack; this is just an alternative for testing.

What’s Next?

In Step 4 we’ll enhance the UI and behavior:

  • Persist chat history more robustly (st.session_state patterns).
  • Add model controls (temperature, max tokens); a preview sketch follows this list.
  • Improve formatting and add a “Clear chat” button.
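
As a preview of those model controls, here is a minimal sketch using the vertexai SDK's GenerationConfig (parameter values are illustrative, not recommendations):

# Preview of Step 4: passing generation parameters to Gemini.
# Assumes vertexai.init(project=..., location=...) has already run, as in app.py.
from vertexai.generative_models import GenerativeModel, GenerationConfig

model = GenerativeModel("gemini-1.5-pro")
response = model.generate_content(
    "Explain Python decorators in two sentences.",  # example prompt
    generation_config=GenerationConfig(
        temperature=0.2,        # lower = more focused, deterministic output
        max_output_tokens=512,  # cap the response length
    ),
)
print(response.text)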
[Screenshot placeholder: Chat UI with multiple turns, clear button, and settings sidebar]
Streamlit + Vertex AI Chatbot Course — Step 3 of 10