AI agents in communication with applicants and students
Recently, there has been growing interest in deploying an AI agent or chatbot that automatically responds to applicants or students (e.g., regarding admissions, study agenda, fees, deadlines, or credit recognition). The idea is appealing: faster responses, reduced workload for departments, and 24/7 availability. However, it is important to understand that automatic communication "on behalf of the university" carries more weight than internal AI use. When an employee uses AI as an assistant and verifies the output, it is generally safe. But when AI responds directly to applicants or students, its answer may be perceived as the official position of the university, even if that was not the intention.
External communication is more sensitive than internal AI use
Internal use (recommended)
AI can help with summarizing documents, drafting responses, checking tone, translations, preparing outlines, analyzing materials, etc. The key point is that the employee reviews and supplements the output, and responsibility remains with a human.
External responses (sensitive)
Once AI responds directly to applicants and students, the risk grows that:
- The answer will be inaccurate or outdated
- The student will take it as a binding piece of information
- Confusion, complaints or reputational damage may occur
- Personal data may be handled inappropriately
The most common risks of AI agents or chatbots responding to applicants or students
AI can confidently answer incorrectly
Modern AI can sound very convincing, even when it is uncertain or invents an answer. In practice, this means it may:
- Provide incorrect dates, requirements, or fees
- Mix up rules between programs or academic years
- Misinterpret internal regulations
- Create the impression that an exception exists when it actually does not
Consequence: the applicant or student bases their decisions on information that is incorrect.
Official position vs. helpful information
If a response comes from the university website, school chat, or email, users typically understand it as official. It is not enough to rely on the assumption that it was "just a chatbot."
Consequence: the university then has to explain the situation, repair the damage and unify interpretation.
Without high-quality and maintained sources, AI struggles
AI needs clearly defined sources and regularly updated information. To respond correctly, it must have:
- An accurate source of truth (current regulations, guides, FAQs, study rules)
- A clear structure (what applies to whom - program, year, mode of study)
- Regular maintenance (deadline changes, new exceptions, updated fees)
If the sources are incomplete, scattered in emails or outdated, the chatbot will inevitably:
- Respond "according to its best guess"
- Mix old and new information
- Return generic phrases where precision is needed
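The "answer only from a maintained source of truth" principle above can be sketched in a few lines. This is a minimal illustration, not a production design: the FAQ entries, the similarity threshold, and the metadata fields (scope, review date) are all hypothetical assumptions.

```python
from difflib import SequenceMatcher

# Hypothetical source of truth: each entry records the answer, who it
# applies to, and when it was last reviewed (all values invented here).
FAQ = [
    {"question": "what is the application deadline",
     "answer": "Applications close on 31 March.",
     "scope": "bachelor programmes",
     "reviewed": "2024-01-15"},
    {"question": "how do i pay the tuition fee",
     "answer": "Fees are paid via the student portal.",
     "scope": "all students",
     "reviewed": "2024-02-01"},
]

def answer(query, threshold=0.6):
    """Return a grounded answer, or None to hand the query to a human."""
    best, best_score = None, 0.0
    for entry in FAQ:
        score = SequenceMatcher(None, query.lower(), entry["question"]).ratio()
        if score > best_score:
            best, best_score = entry, score
    if best is None or best_score < threshold:
        return None  # nothing in the source of truth -> do not guess
    return (f'{best["answer"]} '
            f'(applies to: {best["scope"]}; reviewed {best["reviewed"]})')
```

The important design choice is the `None` branch: when no maintained source covers the question, the system refuses to improvise and escalates, instead of responding "according to its best guess".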
Personal data and sensitive situations
Questions from applicants and students often contain:
- Birth numbers, addresses, personal identifiers
- Information about medical limitations or social circumstances
- Study results, disciplinary issues
Automatic processing of such data requires extreme caution: minimal data collection, secure storage, clear purpose of processing, controlled access, and auditability.
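Data minimization can start before any text reaches a model or a log. The sketch below is illustrative only: the patterns are assumptions (the birth-number pattern assumes the common NNNNNN/NNNN format), and a real deployment needs a vetted PII policy, not two regular expressions.

```python
import re

# Illustrative patterns only; real PII detection needs a reviewed policy.
PATTERNS = {
    "birth number": re.compile(r"\b\d{6}/\d{3,4}\b"),  # assumed NNNNNN/NNNN format
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text):
    """Replace recognised identifiers before text is stored or sent onward."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} removed]", text)
    return text
```

Redacting at the point of intake supports several of the requirements above at once: minimal data collection, a clear purpose of processing, and safer storage of conversation logs.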
Responsibility, traceability and escalation
With human responses, we know who replied and why. With AI, it is necessary to address:
- Who "owns" the content and who is responsible for errors
- Whether and how conversations are stored (and who can access them)
- How AI recognizes when to forward a query to a human (e.g., individual assessment, exceptions, appeals)
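The escalation rule in the last bullet can be made explicit even in a simple system. The keyword list below is a hypothetical starting point, not a complete policy; in practice it would be owned and maintained by the responsible department.

```python
# Hypothetical rule: anything touching individual assessment, exceptions,
# appeals, or disciplinary matters is routed to a human (keywords invented).
ESCALATION_KEYWORDS = ("exception", "appeal", "complaint",
                       "individual", "disciplinary")

def route(query):
    """Return 'human' for sensitive queries, 'bot' for routine ones."""
    q = query.lower()
    if any(keyword in q for keyword in ESCALATION_KEYWORDS):
        return "human"
    return "bot"
```

Keeping this rule as explicit, reviewable code (rather than leaving the decision to the model itself) also answers the traceability questions above: it is clear who owns the rule and why a given query was, or was not, escalated.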
Tone, fairness and accessibility
AI may:
- Use an inappropriate tone (too casual or, conversely, too cold)
- Give different answers to two people asking the same question (inconsistency)
- Struggle with ambiguous or non-standard phrasing
- Fail in queries requiring empathy
AI as an assistant, not a spokesperson
To protect applicants and students from confusion while still benefiting from AI, the following principle is effective:
AI helps internally - humans respond externally
An employee may use Microsoft Copilot Chat with enterprise data protection to draft responses, simplify text, translate or check clarity. The final response is always reviewed by a human and based on verified sources (website, internal regulations, decisions).
If automation, then safe automation
If the goal is to speed up communication and reduce workload, a combination of safe and quickly implementable steps often helps:
- Map the twenty most common questions (admissions, fees, confirmations, schedule)
- Create or update one clear FAQ page for each area
- Prepare approved response templates (short and with links)
- Use AI internally to speed up writing and unify tone (with human review)
- Set up a simple navigation page that routes the question correctly
Conclusion
AI is an excellent assistant for employees and can significantly speed up work. However, in communication with applicants and students it is crucial to follow the principle: AI may help prepare the response, but the official message must be verified and human-controlled. This protects students, applicants, and the university’s reputation.