
This webinar was recorded live on August 22, 2024.
Phishing remains a persistent threat to organizations. A Mandiant trend report from this year said 17% of all compromises are the result of phishing; Verizon's Data Breach Investigations Report put the figure as high as 25%. We don't know the exact number, but somewhere between 17% and 25% of all compromises have phishing as the root cause—up to about a quarter.
It's important to note that threat actor tactics improve as defenses evolve. Techniques that worked 10 years ago don't work nearly as effectively today. Threat actors know this and adapt their approach as defenses close off the easier paths to a successful phish.
Threat actors have also developed technical defeats for automated message and link detection. The days of being able to identify good or bad emails based solely on a link are ending. Safe Links in Microsoft 365, for example, is no longer the be-all and end-all it was when first introduced.
We even see QR codes used in place of links, so that the victim visits the site from a device that's less protected. The link isn't in the message at all; the QR code is scanned with a phone camera, and that phone may not be covered by your defenses. Adversaries are able to circumvent the controls you've put in place.
Worth mentioning: our Suspicious Email Analysis Service (SEAS) team here at Field Effect are our experts in phishing. They deal with the latest trends and published a blog earlier this year summarizing the new tactics they saw in 2023. We work behind the scenes with that team to understand the most effective controls against both the newer techniques we're seeing and the techniques threat actors continue to use persistently.
We'll start walking through those now.
The first category is security awareness.
Fundamentally, it's important to have a phishing reporting service, preferably integrated into your email client so that your users have access to something that can give them a hint whether a message is good or bad. Providing users with a way to safely report a message that appears suspicious is a great start.
It's all about culture. Instead of clicking and hoping for the best, if you're not sure, ask for help. Provide a means for an expert to help your users assess: is this safe or is this not? Develop a culture where you don't just click on things—you ask first before clicking on things. A tool integrated right into your email client is a great way to do this.
Make it a positive experience to report false positives. There's nothing worse than reporting something because you're just not quite sure and getting a message back like, “Of course that's safe. Why would you even send that to us? Why would you waste our time?” The better framing is that the user cared enough about security to ask for help. If you don't respond well to a false positive, what's the likelihood that user will submit again, even when they're genuinely unsure and the message turns out to be malicious? Probably not very good.
Having a culture where it's okay to report false positives in the name of being secure is a great defense. It helps your users understand they're not in it alone; there's going to be support for them in deciding whether an email is good or bad.
The quality of phishing has improved to the point where you can't rely on poor grammar, poor formatting, or typos. Those elements just aren't present in good-quality phishing anymore. Users can be tricked, and they can be tricked effectively enough that, if they aren't encouraged and rewarded for reporting and asking for help, it just leads to unnecessary clicks.
The Field Effect Suspicious Email Analysis Service is available to all Field Effect MDR clients, and integrates directly into your email client. When a user submits a message via their email client, they'll be prompted with a few specific questions about the message.
The SEAS tool also gives the option to remove the message from the inbox. If they don't want it—because they're pretty sure that message is not one they want to come back to—they can click on that and submit.
Once a user has submitted, they'll get rapid feedback. For example, it will say very clearly at the top "Your SEAS result: Malicious". Something that's submitted has now been identified, and you've been told this message is malicious. What should you do? Very clear instructions on what you should do or not do.
Indicators of why the email is malicious are also provided. This helps users learn to assess messages themselves. If you submitted something because it didn't quite look right, and what you noticed matches what makes the email malicious—maybe it came from a low-reputation domain, or it contained text that pressures the user in line with social engineering techniques—you get positive reinforcement. A user who gets that positive reinforcement will submit again. They're much less likely to click, and you lower your latent click rate.
This is the kind of control that can drastically help your users be better at identifying phishing, and help your technical teams by not having to respond to as many phishing attacks, because you can get in front of them before your users click on suspicious or malicious links.
So, super important: have a reporting tool that's interactive and gives your users feedback, so they can ask the tool or service, “Is this safe to open?” and get a response relatively quickly. That lets them carry on with their work without involving IT personnel or waiting through long gaps between receiving the email, reporting it, and getting the results back. The faster, the better, and the smoother for your business operation.
This one’s an oldie but a goodie. The next control we're going to look at is security awareness training. It's a necessary item to educate your workforce on the state of the art in phishing. As mentioned earlier, you're no longer going to get the traditional cues: poor grammar, spelling mistakes, or templates that look unpolished or not quite real. Phishers have evolved to the point where they can send very realistic-looking emails.
At Field Effect, we offer professional services where we perform phishing campaigns for our clients. We come up with templates convincing enough that it takes an expert to differentiate real from fake, and phishers out there can do exactly the same thing.
So train staff on what to look for. Focus on teaching your users to report all suspicious emails before they click. As we mentioned, having a tool that gives interactive feedback is great to ensure your users get quick responses, can take action themselves, and are empowered to be part of your cybersecurity solution rather than impaired by it or unable to function to the best of their abilities in your business.
Ask yourself: does anything about this message seem unusual? If something's unusual, ask for help. If everything looks relatively okay, you're still taking a risk by clicking, but you've done your due diligence to assess, “No, I don't think this is suspicious,” and you can move on.
One newer change in security awareness training is the need for tailored training for higher-risk staff. That typically means your C-suite and others with high public exposure: because so much information about these users is available on the internet, it's easy to build profiles on them and tailor phishing accordingly. It also means your financial staff, primarily because of business email compromise, which is becoming a larger problem and which starts with phishing. If your financial staff are aware of procedures, can follow them under pressure, and have a strong understanding of what is and isn't suspicious via email, you're in a better place to resist phishing against these more highly targeted groups.
Make email security training part of employee onboarding whenever possible. Don’t have it as an add-on. Have it built into the process so that when a new user gets their computer and their accounts, they know what the standard is and how things are done within your organization. If it’s bolted on the side, that’s always an indicator that a security control isn’t as effective as it could be.
Last item related to security awareness training: conduct phishing tests. It's really important to know your latent click rate, the percentage of users who will click on a phishing message they receive, so you can quantify your overall risk from phishing attacks.
The Microsoft Digital Defense Report found a worldwide click rate on phishing emails of about 7.8%, a credential submission rate of about 2.3%, and a report rate of about 11.3%.
If your report rate is lower than that, or your click or credential submission rates are higher, you may want to invest in more security awareness or phishing training, because your staff are clicking more frequently than average. The only way to really know where you stand is to test for it.
A caution about phishing tests: if your goal is to evaluate users, a blind phishing test is a very difficult way to do it, and there are several ethical considerations. Is a blind test against your users fair—especially when you're trying to deceive them? If your goal with phishing tests is to deceive users and punish them via security awareness training, you may want to reconsider the purpose and orient it more around understanding what your latent risk is so you can take other measures to defend against it. That, in our opinion, is the best way to use phishing tests: to lower your overall risk by making adjustments to your other security controls to compensate for users’ willingness to click.
Let's move now into more technical controls—these are things you can do to help your network that fall more within the domain of IT.
The first one is fundamental: use email security solutions. You need spam filtering. You need solutions that will scan all your email and attachments for threats. Automatic quarantining is necessary, and you should check any and all links before a user is able to use them.
These are fundamental security controls available in most email security products at this point. Hopefully you already have them. If you don’t, it’s vitally important that you have scanning solutions in your email processing chain.
Having tools isn't the only thing you need to do. You also need logging enabled so that, if you do have a compromise, you have the audit trails forensics needs to determine the root cause of the compromise and when it happened.
The more effective your logging is, the faster forensics will be able to make that determination, and the faster you'll be able to recover from an incident or a compromise.
You want to make sure your authentication logs are on, your audit logging for access to resources is enabled, and your email logging is in place. You also want to ensure you’re retaining these logs for at least 90 days so you can go back as far as needed in an investigation to quickly identify root cause.
You also want to audit the rules in your mailboxes. Look for rules that forward email externally or delete messages. These are a preferred tactic for a threat actor who has established a beachhead within your network and wants to maintain it without getting caught: they can send emails matching a chosen pattern and, with the rule processing those messages silently, maintain a command-and-control channel back into your network.
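If your email provider exposes an API for mailbox rules, this audit can be scripted. Below is a minimal sketch against Microsoft Graph's messageRules endpoint; it assumes you've already obtained an OAuth token with mail-read permissions (token acquisition is elided), and “@example.com” is a placeholder for your own domain.

```python
# Minimal sketch: flag suspicious inbox rules via Microsoft Graph.
# Assumes an OAuth token with mail-read permissions is already in hand;
# "@example.com" below is a placeholder for your organization's domain.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"

def suspicious_rules(user_id: str, token: str) -> list[dict]:
    """Return inbox rules that forward externally or delete messages."""
    resp = requests.get(
        f"{GRAPH}/users/{user_id}/mailFolders/inbox/messageRules",
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    resp.raise_for_status()
    flagged = []
    for rule in resp.json().get("value", []):
        actions = rule.get("actions", {})
        forwards = actions.get("forwardTo", []) + actions.get("redirectTo", [])
        external = [
            r for r in forwards
            if not r["emailAddress"]["address"].lower().endswith("@example.com")
        ]
        if external or actions.get("delete") or actions.get("permanentDelete"):
            flagged.append(rule)
    return flagged
```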
Follow the principle of least privilege for all your accounts. Hopefully you're using role-based access control—the idea being that a user has only the permissions they need to accomplish their tasks and nothing more. You want this minimized at all times.
Ensure that privileged accounts are segmented from user accounts. We mean high-privileged accounts with access to very sensitive data and administrator accounts. Ideally, those accounts should not be used for internet activities or user-class activities such as web browsing and email, but only for the tasks they’re designated for.
Common practice: a user, M. Russell, would have a standard account plus an “mRussell-a” or “mRussell-admin” account for administrative activities. The standard account wouldn't have administrative privileges, so if I were phished, only my standard account would be compromised, not the one with elevated privileges. This is now best practice, and we hope many of you are engaged in it. Segmenting your user base can drastically lower the impact of a compromise because there is less attack surface for an adversary once they're initially in your network via a phish.
Be wary of shared and delegated mailboxes. Not everyone practices the same level of security awareness, and all it takes is one user who isn't as cautious to compromise everyone with access to a shared or delegated mailbox. Use them only as necessary, and make sure there's training associated with those accounts so everyone behaves to the same standard when it comes to security awareness.
Audit your accounts regularly. We've already mentioned validating inbox rules—make sure the rules that are there should be there, and that none have been added or look unusual. Flag them, follow up, and deal with them as necessary.
Have alerts for changes to admin and privileged users or groups. A common threat actor technique once they're in your network is to add administrative accounts or privileges to compromised accounts so they can move laterally.
Spot-check your privileged accounts and groups monthly, and all users at least annually, so you have reassurance that users are in the right roles with the right permissions, and that there haven’t been unexpected additions that elevate privileges beyond what they should be.
This is all IT activity—mostly processes, not technology. It’s all something virtually any organization can do if they're willing to dedicate the time and effort.
This next control is relatively new, having only been widely available for a few years: conditional access policies, which you can use to govern all access to your organizational resources.
In the context of phishing, you can alert on or deny access to unusual logins. When a credential compromise occurs—when a user gets phished and submits their credentials—threat actors will generally validate those credentials, and they’ll do so through an unusual login.
That login might come from a location subject to “impossible travel”: a part of the world the legitimate user could not have reached since they were last seen somewhere else, because the time required to travel between those locations is infeasible.
Unusual logins can also come from suspicious or less secure internet service providers. They’re outside the user’s normal behavior, and these can be red flags that credentials have been compromised and a user within your network has been phished.
If you alert on these, you’ll have a slower but still effective reaction. If you want to be more aggressive, you can deny these unusual logins or even lock accounts that demonstrate them. You can get ahead of phishing in many instances, and even if a user clicks, this type of conditional access control can step in and mitigate the harm very early in the threat actor’s activities.
This is a good policy to have. It’s relatively new—gaining more favor in the last couple of years—and it’s very effective at limiting exposure to your organization if you do have a successful phish.
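For intuition, the impossible-travel check behind these policies is straightforward geometry: compare consecutive logins and flag any pair whose implied travel speed is infeasible. Here's a minimal sketch, assuming you already have geolocated login events from your logs (the 900 km/h ceiling, roughly airliner speed, is an illustrative threshold):

```python
# Minimal sketch of impossible-travel detection over geolocated logins.
from dataclasses import dataclass
from datetime import datetime
from math import asin, cos, radians, sin, sqrt

MAX_SPEED_KMH = 900  # roughly airliner speed; anything faster is infeasible

@dataclass
class Login:
    when: datetime
    lat: float
    lon: float

def distance_km(a: Login, b: Login) -> float:
    """Great-circle distance between two logins (haversine formula)."""
    dlat, dlon = radians(b.lat - a.lat), radians(b.lon - a.lon)
    h = sin(dlat / 2) ** 2 + cos(radians(a.lat)) * cos(radians(b.lat)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))  # Earth radius ~6371 km

def impossible_travel(prev: Login, curr: Login) -> bool:
    """Flag the new login if the implied travel speed is infeasible."""
    hours = max((curr.when - prev.when).total_seconds() / 3600, 1e-6)
    return distance_km(prev, curr) / hours > MAX_SPEED_KMH
```

In a real deployment this result would feed the conditional access policy, triggering an alert, a denied login, or an account lock depending on how aggressive you choose to be.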
Next item: DMARC. It's a set of email authentication protocols and DNS configurations for your mail domains that ensure mail claiming to be from you was actually sent by you, and that let you verify the same claim for mail you receive from others.
The purpose of this is to eliminate “spoofable” domains. If a message claims to come from your domain, DMARC helps receivers verify that it actually did and wasn't sent from somewhere else. Adoption was relatively slow after the specification's introduction (it was published as RFC 7489 in 2015), but over the last few years it has become standard among organizations. Everyone should be doing it: it has been a best practice for years and is now essentially mandatory.
The advantage: if you set up DMARC on your domain, no one can pretend to be you. No one can spoof your brand. For inbound mail, phishing messages can still pass DMARC checks, but they cannot claim to be from organizations that have properly implemented it. If all organizations implement DMARC, spoofing largely disappears, and it becomes much harder for threat actors to conduct phishing activities.
Be wary of allowing your security tools that assist in message processing—like your email gateway—to be allow-listed broadly in the SPF records that back your DMARC policy. You want to be as restrictive as possible about what is permitted to send email on your behalf. This does the most to ensure that other organizations receiving email from your domain are actually receiving legitimate messages, not spoofs.
At this point, DMARC should be considered an essential security control for all organizations.
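For the curious, a DMARC policy is just a TXT record published in DNS at _dmarc.<your-domain>. Here's a minimal sketch of fetching and inspecting one; it assumes the third-party dnspython package, and example.com is a placeholder domain.

```python
# Minimal sketch: fetch and inspect a domain's DMARC policy.
# Requires the third-party dnspython package (pip install dnspython).
import dns.resolver

def dmarc_policy(domain: str) -> str | None:
    """Return the p= policy tag from the domain's DMARC record, if any."""
    try:
        answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None  # no DMARC record published
    for rdata in answers:
        record = b"".join(rdata.strings).decode()
        if record.startswith("v=DMARC1"):
            for tag in record.split(";"):
                key, _, value = tag.strip().partition("=")
                if key == "p":
                    return value  # none, quarantine, or reject
    return None

# A typical enforcing record looks something like:
#   "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"
print(dmarc_policy("example.com"))
```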
We’ll move to a control that’s good for limiting the impact of a phishing campaign against your organization: make sure you’re using a DNS firewall. This takes advantage of security intelligence derived from researchers worldwide to prevent access to known malicious sites. Use either a DNS firewall or protective DNS, depending on your vendor, to ensure that your users can’t access known malicious domains.
If a phisher is using a domain and that domain gets flagged by any researcher, it’s quickly shared across the community. If your organization receives a message with that domain and a user attempts to click it, a DNS firewall will sinkhole the request and prevent the user from being victimized.
It essentially avoids known bad or malicious sites. It doesn’t stop unknowns, but it helps by blocking access to places already identified as suspicious or malicious.
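Conceptually, the logic is simple. Here's a toy sketch of the sinkhole idea; the blocklist and sinkhole address are illustrative only, whereas real products consume continuously updated threat-intelligence feeds:

```python
# Toy sketch of the DNS firewall (protective DNS) concept: known-bad
# domains resolve to a harmless sinkhole answer instead of the real site.
import socket

BLOCKLIST = {"1egit.com", "phish-example.net"}  # hypothetical bad domains
SINKHOLE_IP = ""  # conventional null route; products often serve a block page

def resolve(domain: str) -> str:
    """Resolve a domain, sinkholing anything on the blocklist."""
    if domain.lower().rstrip(".") in BLOCKLIST:
        return SINKHOLE_IP
    return socket.gethostbyname(domain)
```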
A quick note for Field Effect clients: we offer DNS firewall as part of the Field Effect MDR solution, and it’s configurable through the Field Effect portal. In the portal, go to the Administration tab on the left menu, select DNS Firewall, and enable it. There are settings to fill in, and we have support and how-to guides available in the support portal. If you’re unsure how to enable this or want more information, reach out to your point of contact or the support team here at Field Effect—we’d be very happy to help you. This gives you more containment for any phishing messages your users do click on. Every little bit helps.
Another security control, a more interesting and still-evolving one, is monitoring for typosquat domains.
This takes advantage of something technical people will know: sans serif fonts don’t do a very good job of differentiating between similar-looking characters. For example, Michael@legit.com and Michael@1egit.com can look nearly identical in certain fonts. A threat actor could register 1egit.com, set up valid DMARC on it, and make it look as legitimate as possible. The chances that a user would notice the difference are quite low. There is a difference—but over the course of a busy day, is every user going to notice it? Is it fair to ask them to?
A simple change in the default font used in your email client can reveal that the two addresses are very different, and the user can clearly see the difference. A minor detail to detect, but with major impact. Should your users really be responsible for catching something like this? Hopefully they try, but this is truly a technical problem under the hood.
We encourage you to use anti-phishing protections such as Microsoft’s First Contact Safety Tip and Impersonation Protection. If you're using another cloud provider, such as Google, they have equivalent controls that serve the same function.
With First Contact Safety Tip, the first time you receive a message from someone, you get a small flag: “You don’t often get email from….” Clicking this flag gives you a short phishing education prompt explaining why first contact is dangerous. It may look like Michael@legit.com, but it’s not. These prompts give users a bit of help in identifying lookalike domains.
Impersonation Protection behaves similarly, with similar popup messages. Again, the idea is to give your users as much support as possible in identifying when an address is being spoofed by a typosquat.
For those who don’t know: typosquatting is when someone takes a legitimate domain—like the made-up “legit.com”—and varies one or more characters to create a very similar-looking domain, such as “1egit.com.” They are completely different sites but appear closely related, especially at a glance.
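To make this concrete, here's a minimal sketch that generates homoglyph-style variants of a domain, in the spirit of tools like DNSTwist; the substitution table is a small illustrative sample, not an exhaustive list.

```python
# Minimal sketch: generate homoglyph-style typosquat candidates.
# The substitution table is an illustrative sample only.
HOMOGLYPHS = {"l": "1", "i": "1", "o": "0", "e": "3", "m": "rn"}

def typosquat_variants(domain: str) -> set[str]:
    """Swap one lookalike character at a time to produce candidate domains."""
    name, dot, tld = domain.partition(".")
    variants = set()
    for idx, ch in enumerate(name):
        if ch in HOMOGLYPHS:
            variants.add(name[:idx] + HOMOGLYPHS[ch] + name[idx + 1:] + dot + tld)
    return variants

print(typosquat_variants("legit.com"))  # includes "1egit.com"
```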
Maintaining a list of typosquat domains frequently used to mimic your organization’s domains adds an extra layer of protection against phishing that tries to blend into what looks normal for your users.
Another control that's less IT and more within your finance team—this one is specifically related to business email compromise via phishing.
Make sure you develop fraud-resistant financial processes. Establish formal procedures for receiving invoices, processing payments, and changing bank account details. Ensure these processes include identity validation that happens outside of email and is hard to circumvent.
Don’t allow invoices, payments, or bank account changes to proceed if you can’t validate identity. This sets your users up to avoid scams that are well crafted and intended to trick them, and instead gives them a fighting chance to resist.
Always ensure there are secondary validation channels you trust for these types of requests. Email alone should not be considered sufficient. Using other channels makes it much harder for an adversary to bypass your controls.
The key point: make business email compromise as administratively difficult as possible. You may get some pushback from your financial teams because this adds extra steps, but they understand—they don’t want to transfer large sums of money to the wrong people. Having these controls in place, even though they are not IT controls, is tremendously useful.
Don’t rely on password-based credentials alone. Microsoft published research in 2018 and 2021 on the risk of account compromise with and without multifactor authentication, and MFA was shown to reduce the risk of compromised accounts by 99.9%. That statistic alone should show why you need MFA if you don’t already have it.
If you’re moving to MFA or already have it, make sure your solutions resist what we call MFA fatigue. MFA fatigue is when you pester a user with repeated “yes/no” prompts until they approve one just to make the notifications stop. A PIN-based or number-matching solution is better because the user has to enter a number. They are much less likely to enter a six-digit code just to clear a prompt—they’ll investigate why they’re being asked, which helps prevent accidental approvals.
Anything that reduces the impact of MFA fatigue—where a user accidentally allows something that shouldn’t be allowed—is good practice.
We also recommend staying ahead of threat actors by using phish-resistant MFA when possible. This is a newer class of MFA being pushed by vendors, and you’ll see more of it in the coming years. Historically, this meant FIDO2-compliant hardware tokens. The more modern solution is passkeys.
Google has recently adopted passkeys in their password management systems, and we expect Microsoft and other providers to follow. They are likely the future of multifactor authentication—and authentication in general. It’s something you’ll want to be aware of, read up on, and follow as adoption expands. They are a very strong form of MFA and will become increasingly common now that major identity providers support them.
We’re specifically recommending MDR, as opposed to endpoint detection and response (EDR) alone, because if you don’t have a dedicated security operations center, you should outsource that monitoring so that someone is watching your network for you.
If you do have the capacity to monitor your own network, an EDR solution works just fine, because it will give you all the alerts you need to determine what to action and what not to. But if you don’t have the capacity to operate your own security operations center, a managed solution will ensure that you get only what you need through triaged alerts, expert support, and catching threat actors early in a compromise because expert analysts are monitoring full-time.
With Field Effect MDR, you get all these protections. We encourage you to consider it. For those who already have it, you have these protections in place and hopefully you’re enjoying the benefits.
It’s vitally important that you have detection and response capabilities on your endpoints so that when your users are phished, there are endpoint solutions monitoring for unusual behavior on the host where the compromise began. Ideally, those alerts will trigger early in the compromise, giving you time to respond while the impact is still relatively minor to your organization.
Fundamentally, this is about having an incident response plan. Know what you’re going to do in the case of a phishing attack. Have a playbook for how you respond—one that all of your incident responders are aware of and know how to follow—so you can quickly get to the root cause of a phishing compromise and reduce the impact by not allowing the threat actor to remain in your network for long.
If you don’t have a plan and you’re a Field Effect MDR client, you can go to the Support Center and, under Policy Templates, you’ll see our Incident Response Plan template and Incident Response Communications Plan template, among many other cybersecurity and information security templates.
These two in particular are “choose your own adventure” templates: text is highlighted in yellow, you decide what to keep and what to remove, and you can quickly build an incident response plan that aligns with the standard we use here at Field Effect. We make these available to all clients through the portal.
Having a plan allows you to respond more efficiently and effectively, which reduces the impact of future compromises and lowers your overall risk from phishing.
Field Effect MDR clients can add the SEAS button to their email toolbar. You can turn on SEAS responses to submitters, and your users can then ask SEAS for help in determining whether a message is legitimate or malicious.
If they ask for help, wait for a response, and avoid acting on a message until they know whether it's safe, you’ll see a much lower click rate. Your users will be more comfortable submitting and asking for help because they know they'll get a response, they won't be treated poorly for submitting something harmless, and the response will be timely enough for them to move on with their work if the message is legitimate.
Let our experts determine whether a message is safe with SEAS. Help your users with this, because phishing techniques keep getting more complicated and the quality of phishing messages continues to improve.
Turn on the DNS firewall within Field Effect MDR to ensure your users can’t visit known malicious sites. You can create exceptions for users who need access for investigation, such as your security operations staff, but the vast majority of users do not need access to anything identified as malicious—and a DNS firewall helps avoid accidental visits.
Download and complete an incident response plan if you don’t have one already. And if you're looking for more help or need deeper analysis, please consider Field Effect's professional services.
An example of one professional service is our Incident Response Readiness Service. This is for organizations looking to be better prepared to respond to an incident, looking to write an incident response plan with help, and looking to evaluate and understand their threat surface.
We conduct analysis across a number of topics related to incident response readiness and threat surface management. The process begins with completing a two-hour survey. We then send back a tailored IR plan and incident response playbooks.
We work with you to discuss your readiness and your threat surface to ensure you understand where improvements are needed to lower your risk of incidents occurring within your network.
We also do check-ins after these discussions to make sure you're on track and to answer any questions that come up. Recommendations are easy to make, but can be much harder to implement effectively. As part of this service, we ensure that clients are able to implement our recommendations and we work through any challenges they encounter during the implementation process.
For organizations that are not already Field Effect MDR clients, the Incident Response Readiness Service includes a 90-day trial of Field Effect MDR.
Push-based MFA is where you get a prompt, usually on your phone, that says something like, “Is this you logging in?” and you tap Yes or No. This was one of the earlier forms of MFA we saw via apps. It was a big improvement at the time because you were no longer relying solely on credentials; you had that second factor.
Unfortunately, threat actors figured out they could abuse this. They repeatedly send push prompts—often at three in the morning—until the user, just wanting the notifications to stop, finally taps Yes. That’s MFA fatigue.
Because a simple yes/no control wasn’t working as well as it could, PIN-based (or number-matching) MFA became more widespread. Here, the application shows you a number or code, and you must enter that number into the login prompt to approve the sign-in. It’s much less likely that someone will type a six-digit number “just to make it go away”; they’ll stop and think, Why am I getting this? I didn’t try to log in.
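Here's a toy sketch of the number-matching idea; the two-digit challenge mirrors common implementations but is an assumption here:

```python
# Toy sketch of number-matching MFA: the sign-in screen displays a short
# challenge, and the user must type that exact number into their
# authenticator app; a bare yes/no tap is never enough to approve.
import secrets

def new_challenge() -> str:
    """Generate the two-digit number shown on the sign-in screen."""
    return f"{secrets.randbelow(100):02d}"

def approve(challenge_shown: str, user_entry: str) -> bool:
    """Approve the sign-in only if the user typed the matching number."""
    return secrets.compare_digest(challenge_shown, user_entry)

challenge = new_challenge()           # displayed at the login prompt
print(approve(challenge, challenge))  # True only when the numbers match
```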
PINs and one-time codes are not new technology, but their use in MFA grew as a way to resist MFA fatigue. However, PINs can still be intercepted in man-in-the-middle attacks, so they’re not perfect.
That’s why we now talk about phishing-resistant MFA, such as FIDO2 hardware tokens and, more recently, passkeys. These are the next step as threat actors learn to phish both push prompts and PINs. It’s an arms race between attackers and defenders. Phishing-resistant MFA is where organizations will want to move as time goes on.
Right now, threat actors are very capable of dealing with push-based notifications, and you should look to phase those out. PINs still work but are actively being targeted. Moving toward phishing-resistant MFA is where we’d like to see people go.
That technique—using different character sets or Unicode lookalikes—has existed for a long time, but it’s not as widely used or widely known as classic typosquatting.
You can manage this with tools like DNSTwist, an open-source application that can search for a wider set of lookalike domains, including those using Unicode characters. You can also configure restrictions that more closely follow the RFC for domain names, limiting allowed character sets. That can reduce your exposure to alternate character sets, but it may also affect international users who rely on non-ASCII characters.
Unfortunately, there’s no perfect solution here. The system (DNS, domain names, fonts) wasn’t designed with security as the primary goal; it was designed for ease of use. In some cases, you must trade off a bit of that ease of use in order to reduce risk, or rely on technical tools that scan wider character sets and try to identify all possible typosquats, not just the obvious ones.
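As one example of such a technical check, here's a minimal sketch that flags domains using punycode labels or characters outside the classic ASCII letter-digit-hyphen set; it's a coarse heuristic, and will also flag legitimate internationalized domains:

```python
# Minimal sketch: flag domains whose labels fall outside the classic
# ASCII letter-digit-hyphen (LDH) set, or that arrive punycode-encoded
# ("xn--" labels), which can hide Unicode lookalike characters.
import re

LDH_LABEL = re.compile(r"^[a-z0-9]([a-z0-9-]*[a-z0-9])?$")

def suspicious_charset(domain: str) -> bool:
    """True if any label is punycode or uses characters beyond LDH."""
    for label in domain.lower().rstrip(".").split("."):
        if label.startswith("xn--"):
            return True  # punycode-encoded Unicode label
        if not LDH_LABEL.match(label):
            return True  # non-ASCII or otherwise out-of-spec label
    return False

print(suspicious_charset("legit.com"))         # False
print(suspicious_charset("xn--lgit-kva.com"))  # True (punycode label)
```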
Hopefully! It's important to have empathy toward your user base. It’s not their fault that threat actors are very capable. Yes, we hope users notice the signs, but it’s not reasonable to expect them to carry the entire burden alone. You need technical controls in place to reduce both the impact and the likelihood of successful phishing—that’s what a lot of this presentation was about.
Will regulatory pressure help? Yes, because many people think the same way: it can’t just be on your users. If you discourage or punish false positives, you’re also discouraging true positives, which are usually harder to spot than the obviously suspicious ones. If users are reluctant to report anything “in case they’re wrong,” you won’t hear about meaningful threats.
If there isn’t empathy and support for reporting false positives, the whole battle against phishing falls apart. Users won’t feel empowered; they’ll hide mistakes and avoid reporting. Even when they know they’ve been phished, they might not come forward for fear of being reprimanded.
I’ve personally been ridiculed in the past for submitting something that “looked fine” to a CISO—only for it to come back as malicious. I noticed something off about how the request was phrased. It didn’t match how that person would normally ask for something. It felt wrong, so I sent it in. At first I was teased; then, once it came back as malicious, I wasn’t teased anymore.
If organizations don’t show empathy and support, they’ll end up containing bigger incidents instead of responding quickly to smaller ones. It’s far better to encourage cautious behavior—even if it means more false positives—than to silence users and miss the real threats.
Yes, but mostly via security tools rather than the email clients themselves. Within typical email tools, there isn’t a simple, built-in “check all inbox rules for everyone” button.
However, in Field Effect MDR, this is one of the analytics we run behind the scenes. We look for inbox rules with characteristics that make them suspicious, then produce alerts warning our clients that these rules may be malicious, so they can remove the rule, advise the user, and contain what may be an incident against that user.
Within the email tools themselves, I'm much less aware of built-in techniques, but you could script an audit of those tools yourself or get a security tool that does it for you. For example, Field Effect MDR does that for you if it's enabled to connect to your email provider.
By “automated,” I'm assuming you mean that as messages come in through your gateway, they're automatically processed within SEAS. No, because that would be a drastic violation of user privacy. SEAS is intended to be something done with approval. The user decides to make a submission and they agree to share private information to get help.
Doing it automatically is one way you could design the service, but it's not the way we chose to design SEAS, and I don’t believe we plan to change that. However, you could introduce tools into your network that do this behind the scenes and add such detection within your email security products.
If you have high-end email security products, you likely already have functionality that can do this. It would be something along the lines of deep inspection, and I encourage you to check the vendor documentation because there may be methods available now.
A quick search will reveal a number of reputable email security tools and gateways.
Traditionally, we encouraged users to look at the link. Now links are very long and use character tricks that can obscure where you're actually going, so looking at a link may not be the most effective check, but it's still good practice. Inspect the link itself, for example by right-clicking it and copying the address, rather than trusting the display text, because the text shown and the actual link are often different.
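This comparison can also be automated in your mail processing. Here's a minimal sketch that extracts links from an HTML email body and flags cases where the visible text names a different host than the real destination; the sample message is illustrative:

```python
# Minimal sketch: compare a link's display text with its actual href,
# flagging mismatches where the visible text looks like a URL or domain
# but points somewhere else.
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkAuditor(HTMLParser):
    """Collect (display_text, actual_href) pairs from an HTML email body."""
    def __init__(self):
        super().__init__()
        self.links, self._href, self._text = [], None, []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append(("".join(self._text).strip(), self._href))
            self._href = None

def mismatched(display: str, href: str) -> bool:
    """True when the display text itself names a different host than the href."""
    if "." not in display:
        return False  # plain words like "Click here": nothing to compare
    shown = urlparse(display if "//" in display else f"//{display}").hostname
    actual = urlparse(href).hostname
    return bool(shown and actual and shown != actual)

auditor = LinkAuditor()
auditor.feed('<a href="https://1egit.com/login">https://legit.com/login</a>')
print([(text, href, mismatched(text, href)) for text, href in auditor.links])
```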
If you're using Microsoft-based products, enable Safe Links on the backend so that every link a user sees goes through the Safe Links process. Plus, have a DNS firewall in place so that if a user clicks on a link and tries to go to a site, it goes through your DNS firewall, and if the site is known to be malicious, it can intercept and block the user before they reach it.

