JWS to CSV Converter

First, install the dependencies:

```shell
pip install PyJWT pandas
```

Then start with a function that decodes the payload of a single token:

```python
import base64
import json
import csv
import sys

import pandas as pd
from pathlib import Path


def decode_jws_payload(jws_token):
    """Decode the payload (second part) of a compact JWS."""
    try:
        parts = jws_token.split('.')
        if len(parts) != 3:
            raise ValueError("Invalid compact JWS: expected 3 parts")
        # Decode base64url (add padding if needed)
        payload_b64 = parts[1]
        # base64 needs the length to be a multiple of 4
        padding = '=' * (-len(payload_b64) % 4)
        payload_bytes = base64.urlsafe_b64decode(payload_b64 + padding)
        return json.loads(payload_bytes)
    except Exception as e:
        # Keep a short prefix of the bad token for debugging
        return {"error": str(e), "raw_token": jws_token[:50]}
```
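As a quick sanity check, you can round-trip a hand-built token through the decoder. The signature here is a placeholder, since only the payload is decoded (the decoder is repeated so the snippet runs standalone):

```python
import base64
import json


def decode_jws_payload(jws_token):
    """Same decoder as above, repeated so this snippet is self-contained."""
    try:
        parts = jws_token.split('.')
        if len(parts) != 3:
            raise ValueError("Invalid compact JWS: expected 3 parts")
        padding = '=' * (-len(parts[1]) % 4)
        return json.loads(base64.urlsafe_b64decode(parts[1] + padding))
    except Exception as e:
        return {"error": str(e), "raw_token": jws_token[:50]}


def b64url(data: dict) -> str:
    """base64url-encode a JSON object without padding, as compact JWS does."""
    raw = json.dumps(data, separators=(',', ':')).encode()
    return base64.urlsafe_b64encode(raw).decode().rstrip('=')


# Build a token with a dummy signature: header.payload.signature
token = '.'.join([
    b64url({"alg": "HS256", "typ": "JWT"}),
    b64url({"sub": "123", "role": "user"}),
    "dummy-signature",
])
print(decode_jws_payload(token))  # {'sub': '123', 'role': 'user'}
```

Malformed input takes the error branch instead of raising, so one bad line never aborts a batch run.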

Create a `tokens.txt` file with one compact JWS per line:

```
eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjMiLCJyb2xlIjoidXNlciIsImV4cCI6MTczNTY4OTAwMH0.signature1
eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiI0NTYiLCJyb2xlIjoiYWRtaW4iLCJleHAiOjE3MzU2ODkwMDB9.signature2
```

Then run the converter:

```shell
python jws_to_csv.py tokens.txt output.csv --fields sub,role
```
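Only fragments of `jws_to_csv.py` appear in this post, so here is a minimal self-contained sketch of the conversion function. The `fields_of_interest` filter and the blank-line handling are assumptions filled in around the fragments shown:

```python
import base64
import json

import pandas as pd


def decode_jws_payload(jws_token):
    """Decode the payload (second part) of a compact JWS (no verification)."""
    try:
        parts = jws_token.split('.')
        if len(parts) != 3:
            raise ValueError("Invalid compact JWS: expected 3 parts")
        padding = '=' * (-len(parts[1]) % 4)
        return json.loads(base64.urlsafe_b64decode(parts[1] + padding))
    except Exception as e:
        return {"error": str(e), "raw_token": jws_token[:50]}


def jws_to_csv(input_file, output_file, fields_of_interest=None):
    """Decode every token in input_file and write the claims to output_file."""
    rows = []
    with open(input_file) as f:
        for line in f:
            token = line.strip()
            if not token:
                continue  # skip blank lines
            payload = decode_jws_payload(token)
            if fields_of_interest:
                # Keep only the requested claims; missing ones become None
                payload = {k: payload.get(k) for k in fields_of_interest}
            rows.append(payload)
    df = pd.DataFrame(rows)
    df.to_csv(output_file, index=False)
    print(f"✅ Converted {len(rows)} tokens to {output_file}")
```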

```python
    df = pd.DataFrame(rows)
    df.to_csv(output_file, index=False)
    print(f"✅ Converted {len(rows)} tokens to {output_file}")


if __name__ == "__main__":
    # Example usage
    jws_to_csv("tokens.txt", "output.csv",
               fields_of_interest=["sub", "exp", "tenant_id"])
```

Step 3: Handling nested claims

Sometimes your JWS payload contains nested objects:

To flatten these into CSV columns (e.g., `user.id`, `permissions.0`), you can use `pandas.json_normalize()` instead of the direct `DataFrame` constructor. Note that `json_normalize()` flattens nested dicts into dotted column names, but list values (like `permissions`) stay whole in a single column, so they need an extra step of their own.
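Getting list elements into numbered columns of the `permissions.0` style takes one extra step, since `json_normalize()` keeps lists whole. One way to do it (a sketch; the `apply(pd.Series)` expansion is a common pandas idiom, not something specific to this converter):

```python
import pandas as pd

payload = {"user": {"id": 123, "permissions": ["read", "write"]}}
flat = pd.json_normalize(payload)

# Expand the list column into numbered columns: permissions.0, permissions.1
perms = flat["user.permissions"].apply(pd.Series).add_prefix("permissions.")
flat = flat.drop(columns=["user.permissions"]).join(perms)
print(list(flat.columns))  # ['user.id', 'permissions.0', 'permissions.1']
```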

```python
from pandas import json_normalize

normalized = json_normalize(payload)
rows.append(normalized.iloc[0].to_dict())
```

What About Invalid or Expired Signatures?

A pure converter doesn't need to verify the signature; it just decodes the payload. However, you may want to add a `signature_valid` column by verifying each token separately with a cryptographic library (e.g., `cryptography`, or PyJWT with the signing key).
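As a sketch of what populating that column could look like for HS256 tokens, the signature is just an HMAC-SHA256 over `header.payload` with the shared key. This stdlib-only version stands in for a full library-based check and assumes a symmetric key:

```python
import base64
import hashlib
import hmac


def hs256_signature_valid(jws_token: str, key: bytes) -> bool:
    """Check an HS256 compact JWS signature; returns False on any mismatch."""
    try:
        header_b64, payload_b64, sig_b64 = jws_token.split('.')
    except ValueError:
        return False  # not three dot-separated parts
    signing_input = f"{header_b64}.{payload_b64}".encode()
    expected = hmac.new(key, signing_input, hashlib.sha256).digest()
    padding = '=' * (-len(sig_b64) % 4)
    try:
        actual = base64.urlsafe_b64decode(sig_b64 + padding)
    except Exception:
        return False
    # Constant-time comparison, as for any MAC check
    return hmac.compare_digest(expected, actual)
```

Each decoded row could then carry `"signature_valid": hs256_signature_valid(token, key)` before being appended.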

If you work with JWT (JSON Web Tokens) or JWS (JSON Web Signatures) in logging, analytics, or batch processing, you’ve likely run into the same headache: how do you analyze hundreds or thousands of these tokens in a human-readable way?

"user": "id": 123, "name": "Alice", "permissions": ["read", "write"]