Employees and job applicants alike must now demonstrate proficiency with AI. Even if a specific job description doesn’t mention AI skills, the expectation is there. Job applicants are expected to describe how they’ve used AI to enhance their productivity. Existing employees risk losing their jobs, or endangering their chances at promotion, if they can’t prove that their position requires a human or that they can use AI to make themselves even more productive.
Being proactive can make you valuable. So, if you aren’t already proficient with AI, where should you start?
First, have a firm grasp of the terminology and know the most commonly used AI tools, as well as the ones currently in the hype cycle. Second, have a clear sense of ethical AI use and disclosure. Although AI is a productivity tool in some situations, it can be a way to fake effort in others. You need to be able to talk about the difference between being productive and shirking your job responsibilities. Third, show that you have experience using AI for more than writing or image generation. Fourth, know what AI can do well and where it loses its value. If you can do all this, you’ll stand out among your peers.
What AI Skills Do Job Applicants and Employees Need Now?
Workers today are expected to show AI proficiency by using tools ethically, disclosing usage when appropriate, applying AI beyond writing and image generation, and integrating it into workflows while keeping final decisions and accountability with humans.
Be Conversant in AI Ethics, Disclosure and Authenticity
The standards of ethical or human-centered AI may feel like a moving target because they can vary depending on industry, application and the tools themselves. There isn’t a clear answer to what behavior is appropriate, so it pays to have done some reading and analysis of your position.
Stay current by following at least one reputable source like the New York Times or Wall Street Journal. You may also want to follow a respected academic institution like the Stanford Institute for Human-Centered Artificial Intelligence. Develop your own personal code of ethics about where you feel comfortable using AI, when disclosure is necessary and when having AI tools as unseen assistants is perfectly reasonable.
There are clear situations when using AI isn’t okay, such as in school when the professor or teacher has not expressly approved its use. Things get a little murkier in the business world, however, so be prepared to answer questions about AI disclosure. One general rule is to disclose any time you’ve used AI instead of trying to pass off its ideas and outputs as your own.
Don’t use a default statement like calling a project AI-assisted. Instead, be specific and state how and where AI pitched in: “AI was used to supply and vet resources for this business plan but not for writing or editing.” Emails between colleagues probably don’t need disclosure, but emailing a newsletter throughout the company or to a subscriber list would: “Some of the articles included here were collected using AI, but all articles were summarized by the editorial team.” Likewise, most people are comfortable with AI used to check grammar or as part of the search process.
There’s infinite nuance here. External communications require more detailed disclosure. Within a company, however, disclosure might sometimes only be necessary at one level. An example is telling your manager that you troubleshot and solved a coding issue with AI. Most companies now have AI disclosure statements and are training employees on appropriate use, but an applicant hasn’t had that training. So, you’ll likely be expected to talk about this topic intelligently and explain your reasoning.
Know What Generative AI Can and Can’t Do Well
Generative AI is getting better at writing, image generation and coding, but it can’t take the place of an individual’s experiences, nor can it produce an image or write about information not in its training data. That’s when it will hallucinate to fill in the gap. The user is responsible for verifying the information generative AI creates, which is an essential skill in the workplace.
Likewise, you need to be able to recognize generative AI outputs. For example, if you know what AI can produce, you’ll more easily recognize an AI-generated business plan because it will be thin unless whoever created it spent time feeding details into the prompt and fine-tuning the output. It can help to know the most common terms, phrases and structure AI uses when it writes, but your experience needs to go deeper than writing an essay or a cover letter.
Reasoners can use multiple data sources to synthesize information, but they can still hallucinate. In fact, they tend to hallucinate even more than their non-reasoning counterparts. Be sure that you’re the employee (or candidate) who has gone beyond the basics of first-generation AI tools like ChatGPT. An applicant who has used Grammarly to finish their essay or who posed questions to the free versions of OpenAI’s models is not going to have the depth the workplace needs. Give GPT-5 tasks using the “instant” or “thinking” models rather than leaving it in auto mode and letting the model choose for you. “Thinking” tasks are better suited to reasoners, whereas actually doing a task is the domain of the agents.
To get experience with both, develop a task that engages both models and then ask the model to explain when it used reasoning capabilities and when it acted like an agent. A very common and valuable skill for nearly every job is being able to take a table of data and turn it into a graphic. Using a spreadsheet as input, ask GPT-5 or Copilot to create a report complete with graphics and interpretation. The thinking mode will be able to identify trends and speculate about why they occurred. The agent or instant mode can make the visuals but will not be able to give deep interpretation. It can make a slide deck, which a reasoner isn’t suited for.
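If you want to see what the spreadsheet-to-graphic task actually involves before handing it to a model, here is a minimal sketch in Python using pandas and matplotlib. The quarterly sales figures and file name are invented for illustration; your own spreadsheet would stand in for the inline data:

```python
# A toy version of the "table of data -> graphic" task you might hand to an
# AI agent: chart tabular data, then surface a simple trend a reasoning model
# could interpret. The quarterly figures below are made up.
import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt
import pandas as pd

data = pd.DataFrame({
    "quarter": ["Q1", "Q2", "Q3", "Q4"],
    "sales": [120, 135, 128, 160],
})

# The "agent" part: produce the visual.
fig, ax = plt.subplots()
ax.bar(data["quarter"], data["sales"])
ax.set_xlabel("Quarter")
ax.set_ylabel("Sales (units)")
ax.set_title("Quarterly sales")
fig.savefig("quarterly_sales.png")

# The "reasoner" part a thinking model adds: identify the trend worth explaining.
change = data["sales"].iloc[-1] - data["sales"].iloc[0]
print(f"Sales changed by {change} units from Q1 to Q4")
```

Trying the task by hand like this makes it easier to judge whether the model’s chart and interpretation are any good.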
Level Up Your Ability to Use AI
Newer workers who have completed a training program or earned their college degree in the last three years have probably used one of OpenAI’s models or those provided by Google Gemini or Grammarly. Though it might feel as if writing this way prepared you for what to expect at work, it hasn’t. Today’s job applicant has to do more than describe how generative AI helped them write an essay or solve a homework problem.
Level up your AI proficiency by using it to improve your processes. Learn how to integrate apps with one another for better productivity. Link your Google Calendar to the AI note-taker in a virtual meeting so that it generates tasks in Asana. Produce a more sophisticated project with integrated animated graphics created from the data charts.
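Integrations like the calendar-to-note-taker-to-Asana hand-off above are usually configured inside the tools themselves, but under the hood they amount to turning meeting action items into API calls. This hypothetical sketch shows the shape of that glue code; the meeting data and project ID are invented, no network request is made, and actually creating the tasks would require an Asana access token:

```python
# Hypothetical glue code: turn action items from an AI meeting note-taker
# into payloads shaped for Asana's task-creation endpoint. Nothing is sent;
# this only builds the payloads.
ASANA_TASKS_URL = "https://app.asana.com/api/1.0/tasks"

def action_items_to_tasks(action_items, project_gid):
    """Build one Asana-style task payload per action item."""
    payloads = []
    for item in action_items:
        payloads.append({
            "data": {
                "name": item["summary"],
                "notes": f"From meeting: {item['meeting']}",
                "projects": [project_gid],
            }
        })
    return payloads

# Invented example of what a note-taker might extract from a meeting:
items = [
    {"summary": "Send revised budget to finance", "meeting": "Q3 planning"},
    {"summary": "Draft launch announcement", "meeting": "Q3 planning"},
]
tasks = action_items_to_tasks(items, project_gid="1200000000000000")
print(len(tasks), "task payloads ready to send to", ASANA_TASKS_URL)
```

You don’t need to write this code yourself to use the integrations, but knowing what the plumbing looks like helps you troubleshoot when a workflow breaks.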
Think of ways to use AI to help accelerate your learning on any topic. Use it to brush up on your skills or research a topic related to your work. Nearly all of the models have a study mode now. Forecast what might be needed in your job and start using AI as a personal tutor and study guide. A new employee on the marketing team might currently be tasked with monitoring social channels. Moving up in the position involves knowing how to use HubSpot, so ask the model to help you learn it. It will give step-by-step instructions and even sample data sets.
Don’t forget that nearly all apps have integrated AI assistants, and they may not be generative AI. Experiment with all of them. Adobe tools for image generation and management have AI features that streamline processes. Salesforce has embedded predictive and generative AI tools. Asana’s AI tools can perform automated tasks aligned to workflows. If your employer uses an app, you’ll be expected to use its AI features too.
Tasking AI Without Losing Agency
The more you experiment with AI tools, the easier it becomes to sense the point where initiative will shift from human to machine. The parts of a process that AI cannot supplant — at least not yet — are the bookends: humans initiating action and making final decisions. These are two qualities that make great candidates stand out and that help employees earn promotions.
Using AI means being responsible and accepting the consequences of actions, whether performed by a human or by AI, just as any manager would in a team setting. When you realize that you are responsible for what an AI tool produces, it becomes clearer that you can’t just step out of the picture. AI can produce highly detailed instructions for the actions to take (for example, explaining how to perform CPR and even talking through the steps to use a defibrillator), but the human still places the electrodes and evaluates the situation in real time.
Decision-making, meaning the choice to do something, to modify that something or to do nothing, is still the responsibility of humans, especially in situations where the wrong decision carries the risk of doing harm. Russ Altman, Stanford University professor and associate director of the Stanford Institute for Human-Centered Artificial Intelligence, writes that AI can be developed that is so good it replaces humans, “But a better default would be a doctor who has a useful AI assistant or a driver who gets a lot of help from their car to drive safely. Humans should not be replaced or have agency taken away.”
As a new hire or an applicant, it is unlikely that you’ll be making decisions of this magnitude, but it doesn’t lessen your responsibility to understand your role in responsible and ethical use of AI. There is a line between having AI do work in an effort to improve productivity and having it do all of your work. Going through the mental gymnastics of deciding when and where AI use is appropriate and where it will help you do work better is the essential skill for the future.
