Create a simple chat interface in Streamlit and fetch your first response from Gemini on Vertex AI.
Prerequisite: you've authenticated with `gcloud auth application-default login` (ADC) or you have a service-account JSON key.

Windows (PowerShell/CMD):

mkdir C:\VertexChatbot
cd C:\VertexChatbot
py -m venv .venv
.venv\Scripts\activate
python -m pip install --upgrade pip
pip install streamlit google-cloud-aiplatform python-dotenv
macOS/Linux:

mkdir -p ~/VertexChatbot
cd ~/VertexChatbot
python3 -m venv .venv
source .venv/bin/activate
python -m pip install --upgrade pip
pip install streamlit google-cloud-aiplatform python-dotenv
Create a requirements.txt (helps later for deployment):
streamlit
google-cloud-aiplatform
python-dotenv
In your IDE, point the project interpreter at the virtualenv you just created (.venv) so PyCharm uses the same environment. Create a .env at the project root to avoid hardcoding values:
GCP_PROJECT_ID=YOUR_PROJECT_ID
GCP_LOCATION=us-central1
# Optional if you prefer key-based auth (for CI/servers; ADC recommended locally)
# GOOGLE_APPLICATION_CREDENTIALS=C:/VertexChatbot/keys/sa.json
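For reference, python-dotenv reads this file as simple KEY=VALUE lines, skipping blanks and `#` comments. The sketch below mimics that behavior with only the standard library, so you can see exactly what the .env format above means; the helper name `parse_env` is mine, not part of python-dotenv:

```python
# parse_env is a hypothetical helper that approximates what python-dotenv's
# load_dotenv does: parse KEY=VALUE lines, ignoring blanks and # comments.
def parse_env(text: str) -> dict:
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

sample = """\
GCP_PROJECT_ID=my-project-123
GCP_LOCATION=us-central1
# GOOGLE_APPLICATION_CREDENTIALS=C:/VertexChatbot/keys/sa.json
"""
config = parse_env(sample)
print(config["GCP_PROJECT_ID"])  # my-project-123
```

Note the commented-out credentials line is ignored, which is why the optional key-based auth setting stays inert until you uncomment it.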
app.py — Minimal Chatbot UI + Vertex AI Call

Copy this file as app.py in your project folder. Every important line is commented.
# app.py
# Streamlit UI + Vertex AI (Gemini) first call.
# -------------------------------------------------
# This app:
# 1) Sets up a chat-style UI in Streamlit.
# 2) Initializes Vertex AI with your project/region.
# 3) Sends the user's prompt to Gemini and shows the response.
import os
import streamlit as st
from dotenv import load_dotenv
import vertexai
from vertexai.generative_models import GenerativeModel # Gemini client
# ---- Page & session setup (Streamlit UI) ----
st.set_page_config(page_title="Vertex AI Chatbot", page_icon="🤖", layout="centered")
st.title("🤖 Vertex AI Chatbot (Step 3)")
# Keep chat history in session so it persists across reruns
if "messages" not in st.session_state:
    st.session_state.messages = []  # list of tuples: (role, content)
# ---- Load environment variables (.env) ----
load_dotenv() # reads GCP_PROJECT_ID, GCP_LOCATION, etc.
PROJECT_ID = os.getenv("GCP_PROJECT_ID", "YOUR_PROJECT_ID")
LOCATION = os.getenv("GCP_LOCATION", "us-central1")
# ---- Initialize Vertex AI client ----
# Uses Application Default Credentials (ADC) set by 'gcloud auth application-default login'
vertexai.init(project=PROJECT_ID, location=LOCATION)
# ---- Display existing messages (chat history) ----
for role, content in st.session_state.messages:
    with st.chat_message("user" if role == "user" else "assistant"):
        st.markdown(content)
# ---- Chat input (bottom text box) ----
prompt = st.chat_input("Type your question...")
# ---- Handle new user message ----
if prompt:
    # 1) Save & show the user's message
    st.session_state.messages.append(("user", prompt))
    with st.chat_message("user"):
        st.markdown(prompt)

    # 2) Call Vertex AI (Gemini) to get a response
    try:
        model = GenerativeModel("gemini-1.5-pro")  # prebuilt LLM
        with st.chat_message("assistant"):
            with st.spinner("Thinking with Gemini..."):
                response = model.generate_content(prompt)
                answer = response.text
                st.markdown(answer)
        # 3) Store assistant reply in history
        st.session_state.messages.append(("assistant", answer))
    except Exception as e:
        # Show error nicely in the UI
        with st.chat_message("assistant"):
            st.error(f"Vertex AI error: {e}")
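One thing to notice: `model.generate_content(prompt)` sends only the latest message, so Gemini never sees earlier turns. The Vertex AI SDK's `model.start_chat()` is the native way to get multi-turn behavior; a cruder alternative is to flatten the stored history into a single prompt. The helper below, `build_prompt`, is an illustrative sketch of that flattening (it is not part of either SDK), working on the same `(role, content)` tuples the app keeps in `st.session_state.messages`:

```python
# build_prompt is a hypothetical helper: it flattens the (role, content)
# history into one text prompt so the model can see prior turns.
def build_prompt(messages, new_prompt):
    lines = []
    for role, content in messages:
        speaker = "User" if role == "user" else "Assistant"
        lines.append(f"{speaker}: {content}")
    lines.append(f"User: {new_prompt}")
    lines.append("Assistant:")
    return "\n".join(lines)

history = [("user", "Hi"), ("assistant", "Hello! How can I help?")]
print(build_prompt(history, "What is a tuple?"))
```

In app.py you would pass the flattened text to `model.generate_content(...)` instead of the bare prompt (taking care not to include the just-appended user message twice).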
Your project now has three files: app.py, requirements.txt, and .env. app.py is your main app, .env holds configuration, and requirements.txt pins dependencies for deployment. From your activated virtualenv terminal inside the project folder, run:
streamlit run app.py
Your browser will open at http://localhost:8501. If the port is busy, pick another:
streamlit run app.py --server.port 8502
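If you always want a different port, you can also set it once in Streamlit's config file (`.streamlit/config.toml` inside the project folder) instead of passing the flag every time; `server.port` is a standard Streamlit config option:

```toml
[server]
port = 8502
```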
Type something like:
Summarize the difference between lists, tuples, and sets in Python.
You should see a helpful response from Gemini. The message is also saved to chat history.
If something goes wrong, check the usual suspects:

- Auth errors: re-run gcloud auth application-default login. Ensure the same Google account has access to your project.
- Wrong project: compare against gcloud config list. Update GCP_PROJECT_ID in .env to match.
- Model/region errors: keep GCP_LOCATION=us-central1 unless you know your model is available elsewhere.
- Port already in use: run streamlit run app.py --server.port 8502.

If you want to test with OpenAI instead, install the SDK and set your API key:
pip install openai
# In your terminal (temporary) or .env (persistent)
# OPENAI_API_KEY=sk-...
Minimal replacement code (inside the try: block):
# from openai import OpenAI
# client = OpenAI()
# completion = client.chat.completions.create(
# model="gpt-4o-mini",
# messages=[{"role":"user","content": prompt}]
# )
# answer = completion.choices[0].message.content
In Step 4 we’ll enhance the UI and behavior (e.g., richer st.session_state patterns).