How to Build an AI Telegram Bot with Python | Step by Step Tutorial
- Hackers Realm
In today’s fast-paced digital world, chatbots have become an essential tool for automation, communication, and customer engagement. Among various platforms, Telegram stands out for its developer-friendly API and vast global user base, making it a perfect environment to deploy AI-powered bots. Whether you want to create a personal assistant, automate workflows, or build an interactive chatbot that uses artificial intelligence, Python offers an easy and efficient way to bring your ideas to life.

In this step-by-step tutorial, we’ll walk you through the entire process of building an AI Telegram bot using Python — from setting up your Telegram Bot API to integrating intelligent responses powered by AI models. By the end of this guide, you’ll have a fully functional bot that can understand user input, process data, and respond intelligently — all with minimal coding complexity.
You can watch the video-based tutorial with a step-by-step explanation down below.
Import Modules
!pip install telethon
Install the above package to continue with the rest of the code. The later sections also use the nest_asyncio and openai packages, which can be installed the same way if you don't already have them.
api_id = "<your_api_id>"
api_hash = "<your_api_hash>"
Before connecting your bot to Telegram, you need to authenticate your application with the Telegram API. This is done using two essential credentials: api_id and api_hash.
api_id – A unique numerical ID assigned to your application.
api_hash – A secret key used to authenticate your app’s requests securely.
bot_token = "<your_bot_token>"
This token acts as your bot’s unique access key that allows your Python script to interact with Telegram’s Bot API — sending messages, receiving updates, and performing automated actions on behalf of your bot.
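For anything beyond a quick test, it is safer not to hard-code these credentials in the notebook. A minimal sketch, assuming you export them as environment variables named TELEGRAM_API_ID, TELEGRAM_API_HASH, and TELEGRAM_BOT_TOKEN (these variable names are just an example):
import os
# read the credentials from environment variables instead of hard-coding them
api_id = int(os.environ["TELEGRAM_API_ID"])      # numeric application ID
api_hash = os.environ["TELEGRAM_API_HASH"]       # application secret hash
bot_token = os.environ["TELEGRAM_BOT_TOKEN"]     # token issued by @BotFather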
import nest_asyncio
nest_asyncio.apply()
import logging
import asyncio
from telethon import TelegramClient, events
from openai import OpenAI
import nest_asyncio - The nest_asyncio library allows us to run asynchronous event loops inside environments that already have an active loop, such as Jupyter notebooks or async-based applications.
import logging - The logging module helps us track what’s happening inside our bot — including errors, API calls, or message-handling events.
import asyncio - asyncio is Python’s built-in library for asynchronous programming — it allows the bot to handle multiple tasks concurrently.
from telethon import TelegramClient, events - Telethon is a Python library for interacting with the Telegram API asynchronously.
TelegramClient - the main class used to connect to Telegram using your API ID, API Hash, and bot token.
events - provides decorators and handlers to listen for specific Telegram events, such as new messages.
from openai import OpenAI - This imports the OpenAI client, which allows your bot to communicate with OpenAI's models (such as the gpt-4o-mini model used later in this tutorial).
Setting Up Logging and Initializing the Clients
After importing all necessary libraries, the next step is to initialize our Telegram and AI clients so the bot can communicate with both Telegram’s API and OpenAI’s language models.
# setup logging
logging.basicConfig(level=logging.INFO)
# create telegram client
client = TelegramClient('bot', api_id, api_hash)
# create ai client
ai_client = OpenAI()
The logging module is used to track the activity and behavior of your bot while it runs.
By setting the logging level to INFO, you ensure that all important messages—such as startup confirmations, user interactions, and API responses—are displayed in the console.
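The default output only shows the level, the logger name, and the message; if you also want a timestamp in each log line, you can pass a format string to the same basicConfig call. A small sketch (the format shown is just one common choice):
# add a timestamp to every log line, alongside the logger name and level
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s - %(name)s - %(levelname)s - %(message)s",
)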
Next we initialize the Telegram client using the Telethon library. This is what connects your Python script to the Telegram platform.
'bot' is the session name, used by Telethon to save your session data locally (like login credentials).
api_id and api_hash are your unique Telegram API credentials, which you can obtain from my.telegram.org.
Once the client is created, it acts as the bridge between your bot and Telegram — allowing your program to:
Listen for new messages from users,
Process commands,
Send replies back to the chat.
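As a tiny illustration of that listen-and-reply pattern (a hypothetical standalone example, separate from the bot we build below), an echo handler could look like this:
# hypothetical example: reply to every incoming message with the same text
@client.on(events.NewMessage)
async def echo_handler(event):
    await event.respond(f"You said: {event.text}")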
ai_client = OpenAI() : This line initializes the OpenAI client, which enables your bot to communicate with OpenAI’s GPT models.
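With no arguments, OpenAI() reads your API key from the OPENAI_API_KEY environment variable, so make sure it is set before running the bot. You can also pass the key explicitly; a short sketch:
import os
from openai import OpenAI
# pass the API key explicitly instead of relying on the OPENAI_API_KEY default
ai_client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])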
Writing the Main Function — Handling Commands and AI Responses
The core functionality of our Telegram bot is defined inside the main() function. This is where we start the bot, listen for user messages, and handle commands or AI-based responses. Let’s break it down step by step:
async def main():
    # start the client
    await client.start(bot_token=bot_token)

    # handler for /start command
    @client.on(events.NewMessage(pattern='/start'))
    async def start_handler(event):
        await event.respond("Hello! I am Hackers Realm AI Bot. How can I assist you today?")
        logging.info(f'Start command received from {event.sender_id}')

    # handler for /info command
    @client.on(events.NewMessage(pattern='/info'))
    async def info_handler(event):
        await event.respond("This AI Chatbot is created in Python with OpenAI API.")
        logging.info(f'Info command received from {event.sender_id}')

    # handler for /help command
    @client.on(events.NewMessage(pattern='/help'))
    async def help_handler(event):
        help_text = (
            "Here are the commands you can use:\n"
            "/start - Start the bot\n"
            "/help - Get Help Information\n"
            "/info - Get Information about the Bot\n"
        )
        await event.respond(help_text)
        logging.info(f"Help command received from {event.sender_id}")

    # keyword based response handler
    @client.on(events.NewMessage)
    async def keyword_responder(event):
        # get the message text
        message = event.text.lower()
        if message in ['/start', '/help', '/info']:
            return
        # get response from AI client
        response = ai_client.chat.completions.create(
            model='gpt-4o-mini',
            messages=[
                {'role': 'user', 'content': message}
            ],
            max_tokens=128
        )
        # get content from response
        response = response.choices[0].message.content
        if response:
            await event.respond(response)
        logging.info(f"Message received from {event.sender_id}: {event.text}")

    # keep the bot running until it is disconnected
    await client.run_until_disconnected()
Here, we start the Telegram client using our bot token. This establishes a live connection between our Python script and Telegram’s servers, allowing the bot to send and receive messages in real time.
Command Handlers (/start, /info, /help) - Next, we define event handlers for different commands. These handlers tell the bot how to respond when a user sends a specific command.
The /start command sends a friendly welcome message whenever a user initiates a chat with the bot.
The @client.on(events.NewMessage(pattern='/start')) decorator listens for that specific command and triggers the function.
Similarly, the /info and /help commands are handled in the same way:
The /info command provides a short description of the bot, while the /help command lists all available commands for the user.
Each handler also includes a logging.info() statement to record user interactions for debugging or analytics.
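Following the same pattern, you can register additional commands next to the other handlers inside main(). The /about command below is purely an illustration and is not part of the tutorial bot:
# hypothetical extra command, shown only to illustrate the handler pattern
@client.on(events.NewMessage(pattern='/about'))
async def about_handler(event):
    await event.respond("I am a demo AI bot built with Telethon and the OpenAI API.")
    logging.info(f"About command received from {event.sender_id}")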
AI-Powered Message Handling -
@client.on(events.NewMessage)
async def keyword_responder(event):
    message = event.text.lower()
    if message in ['/start', '/help', '/info']:
        return
This function listens for any incoming message, checks if it’s not a predefined command, and then sends it to the OpenAI API for generating an intelligent response.
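The exact-match check only skips the three known commands. If you add more commands later, a guard that skips anything starting with a slash may be easier to maintain; a small optional variation of the same handler (not part of the original tutorial code):
@client.on(events.NewMessage)
async def keyword_responder(event):
    message = event.text.lower()
    # skip anything that looks like a command, not just the known ones
    if message.startswith('/'):
        return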
Adding AI-Powered Responses -
# get response from AI client
response = ai_client.chat.completions.create(
    model='gpt-4o-mini',
    messages=[
        {'role': 'user', 'content': message}
    ],
    max_tokens=128
)
# get content from response
response = response.choices[0].message.content
if response:
    await event.respond(response)
logging.info(f"Message received from {event.sender_id}: {event.text}")
Beyond fixed commands, we want our bot to be smart — able to understand and reply to general user messages. To achieve this, we add a keyword-based AI responder that listens for any new message that isn’t a command.
Each incoming message is sent to the OpenAI API, which generates a relevant, human-like response using an AI model. The bot then sends that AI-generated text back to the user.
The bot then extracts the AI’s reply and sends it back to the user with await event.respond(response). This transforms your Telegram bot from a simple command-based assistant into a fully interactive AI chatbot capable of holding dynamic conversations.
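One thing to be aware of: ai_client.chat.completions.create() is a blocking call, so while it runs, the event loop cannot process other updates, and a network or API error would crash the handler. A sketch of one way to address both inside keyword_responder, using asyncio.to_thread and a try/except (an optional hardening step, not part of the original tutorial):
# run the blocking OpenAI call in a worker thread and guard against API errors
try:
    completion = await asyncio.to_thread(
        ai_client.chat.completions.create,
        model='gpt-4o-mini',
        messages=[{'role': 'user', 'content': message}],
        max_tokens=128,
    )
    reply = completion.choices[0].message.content
except Exception:
    logging.exception("OpenAI request failed")
    reply = "Sorry, I could not generate a response right now."
if reply:
    await event.respond(reply)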
Keeping the Bot Running -
At the end of the function, we include await client.run_until_disconnected(). This ensures the bot remains active and keeps listening for new messages indefinitely, until you stop it manually. It’s what keeps your bot “alive” and responsive around the clock.

Running the Bot with asyncio.run(main())
asyncio.run(main())
This line is responsible for starting the entire bot application.
Since our main() function is defined as asynchronous (using the async keyword), it can handle multiple tasks concurrently, such as listening for new messages, processing AI responses, and sending replies, without blocking the rest of the program. Because we called nest_asyncio.apply() earlier, asyncio.run() also works inside environments like Jupyter notebooks, where an event loop is already running.
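If you run the bot as a standalone script instead of inside a notebook, nest_asyncio isn't needed and the usual entry-point guard is enough; a minimal sketch:
if __name__ == '__main__':
    # start the bot and keep it running until it disconnects
    asyncio.run(main())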
Final Thoughts
Building an AI-powered Telegram bot with Python is an exciting and practical way to explore automation, artificial intelligence, and real-world API integration. With just a few lines of code, you can create a bot that not only responds to basic commands but also engages users with intelligent, human-like conversations.
Through this step-by-step guide, you’ve learned how to:
Connect your bot to Telegram using the Bot API
Handle user commands like /start, /info, and /help
Integrate OpenAI’s API for dynamic, AI-driven responses
Keep your bot running asynchronously for smooth performance
From here, you can expand your project even further — add new features, connect it to databases, or deploy it to a cloud server for 24/7 uptime. Whether you’re building a personal assistant, customer service bot, or creative AI companion, this foundation opens up endless possibilities for automation and innovation.
Get the project notebook from here
Thanks for reading the article!!!
Check out more project videos from the YouTube channel Hackers Realm



