Friday, June 13, 2025

Eigenvalue Infinite Data Streaming

An interesting method for streaming eigenvalues.


import socket
import struct

import numpy as np



# --- Utility Functions ---
def _calculate_arcsecant(val):
    """
    Safely calculates the arcsecant, arcsec(val) = arccos(1/val).
    Returns NaN if |val| < 1, where arcsecant is undefined.
    """
    if np.abs(val) < 1:
        return np.nan
    # Standard arcsec(x) = arccos(1/x), with range [0, pi].
    # (Using arccos(1/np.abs(val)) instead would fold negative and positive
    # eigenvalues into the same [0, pi/2) range; we keep the standard definition.)
    return np.arccos(1 / val)
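A few sanity checks of that mapping, using standard arcsecant identities:

assert np.isclose(_calculate_arcsecant(1.0), 0.0)        # arcsec(1) = 0
assert np.isclose(_calculate_arcsecant(2.0), np.pi / 3)  # arcsec(2) = pi/3
assert np.isclose(_calculate_arcsecant(-1.0), np.pi)     # arcsec(-1) = pi
assert np.isnan(_calculate_arcsecant(0.5))               # undefined for |x| < 1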



def pseudo_interpolate_arcsecant_stream(x_data_bytes, y_data_bytes, x_interp_chunk_bytes):
    """
    Pseudo-interpolates a chunk of interpolation points using pre-existing x_data and y_data.
    Expects x_data, y_data, and x_interp_chunk as packed binary data (network-order float64 arrays).
    Refitted so each interpolation weight is derived from the arcsecant of the incoming
    eigenvalue, rescaled against a dynamically determined arcsecant "packing range".
    """
    try:
        # Unpack binary data into NumPy arrays (8 bytes per float64)
        num_x = len(x_data_bytes) // 8
        num_y = len(y_data_bytes) // 8

        if num_x != num_y or num_x < 2:
            raise ValueError("X and Y data must have equal length and at least two points.")

        fx = np.array(struct.unpack(f'!{num_x}d', x_data_bytes))
        fy = np.array(struct.unpack(f'!{num_y}d', y_data_bytes))

        # Unpack the current chunk of interpolation points
        num_interp_chunk = len(x_interp_chunk_bytes) // 8
        x_interp_chunk = np.array(struct.unpack(f'!{num_interp_chunk}d', x_interp_chunk_bytes))

        min_x, max_x = np.min(fx), np.max(fx)

        # --- Refitting for the eigenvalue arcsecant packing range ---
        # The "packing range" is a representative span of eigenvalues over which we
        # compute the min/max arcsecant values; those extremes later rescale each
        # incoming eigenvalue's arcsecant into an interpolation weight in [0, 1].
        #
        # Arcsecant is undefined on (-1, 1), so the representative span is clamped to
        # start at 1.0. For very large eigenvalues arcsecant clusters near pi/2, so
        # logarithmic spacing captures both the fast-changing region near 1 and the
        # asymptotic tail better than linear spacing would.
        effective_min_eigenvalue = 1.0 if min_x < 1.0 else min_x
        # Use max_x as a practical upper bound; guard against a degenerate range.
        effective_max_eigenvalue = max_x if max_x >= effective_min_eigenvalue else effective_min_eigenvalue + 1.0

        if effective_max_eigenvalue <= effective_min_eigenvalue:
            # fx had a very narrow (or inverted) range; fall back to a two-point span
            representative_eigenvalues = np.array([effective_min_eigenvalue, effective_min_eigenvalue + 1.0])
        else:
            # Log-spaced points across the clamped range (the 1e-9 nudge keeps the
            # endpoints distinct when min and max coincide after clamping).
            representative_eigenvalues = np.logspace(
                np.log10(effective_min_eigenvalue),
                np.log10(effective_max_eigenvalue + 1e-9), 100)
            # Ensure at least one very large value is represented, so the packing
            # range includes arcsecant's asymptotic approach toward pi/2.
            if effective_max_eigenvalue < 1e10:
                representative_eigenvalues = np.unique(
                    np.concatenate((representative_eigenvalues, [1e10, 1e12])))

        # Arcsecant over the representative range, with NaNs filtered out.
        # With arccos(1/x) the attainable range is [0, pi]; negative inputs land in (pi/2, pi].
        calculated_domain_values = [
            v for v in (_calculate_arcsecant(val) for val in representative_eigenvalues)
            if not np.isnan(v)
        ]

        if not calculated_domain_values:
            # Fallback if no valid arcsecant values could be generated for the domain
            min_arcsecant, max_arcsecant = 0.0, np.pi  # full arccos range
        else:
            min_arcsecant = np.min(calculated_domain_values)
            max_arcsecant = np.max(calculated_domain_values)

        # Guard against a degenerate packing range (e.g., all inputs effectively infinite)
        arcsecant_range_is_zero = np.isclose(max_arcsecant, min_arcsecant)

        interp_y = []
        for x in x_interp_chunk:
            # Nearest point in fx, paired with its (wrapping) neighbour
            idx1 = np.argmin(np.abs(x - fx))
            idx2 = (idx1 + 1) % len(fx)
            y1, y2 = fy[idx1], fy[idx2]

            # Use the input 'x' (eigenvalue) directly for the arcsecant calculation
            arcsec_val = _calculate_arcsecant(x)

            if np.isnan(arcsec_val) or arcsecant_range_is_zero:
                # Eigenvalue outside arcsecant's domain, or a trivial packing range:
                # fall back to y1 as a conservative choice.
                interp_y.append(y1)
            else:
                # Blend y1 -> y2 by the arcsecant's position within the packing range
                interp_y.append(y1 + (y2 - y1) * (arcsec_val - min_arcsecant) / (max_arcsecant - min_arcsecant))

        # Pack in network byte order, matching the unpacking format used above
        return struct.pack(f'!{len(interp_y)}d', *interp_y)

    except struct.error:
        raise ValueError("Invalid binary data format.")
    except ValueError as e:
        raise ValueError(str(e))
    except RuntimeWarning as e:
        # Only reached if warnings are configured to raise
        # (e.g., warnings.simplefilter('error')).
        print(f"Warning in pseudo_interpolate_arcsecant_stream: {e}")
        raise RuntimeError(f"Interpolation warning: {e}")
    except Exception as e:
        raise Exception(f"An unexpected error occurred during interpolation: {e}")
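A quick way to exercise the function locally, without any sockets (the sample values here are arbitrary):

fx_b = struct.pack('!4d', 1.0, 2.0, 5.0, 10.0)  # reference eigenvalues
fy_b = struct.pack('!4d', 0.0, 1.0, 2.0, 3.0)   # associated y-values
xq_b = struct.pack('!2d', 3.0, 7.0)             # eigenvalues to pseudo-interpolate
out = pseudo_interpolate_arcsecant_stream(fx_b, fy_b, xq_b)
print(struct.unpack(f'!{len(out) // 8}d', out))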



# --- Server-Side Logic (unchanged in structure; the core change is in pseudo_interpolate_arcsecant_stream) ---

def handle_client_stream(client_socket):
    """
    Handles communication with a single client for streaming pseudo-interpolation.
    The client first sends fx and fy data, then continuously sends chunks of x_interp.
    Every message is length-prefixed: a 4-byte network-order unsigned int, then the payload.
    """
    try:
        # 1. Initial setup: receive fx and fy data (sent once per connection)

        # Receive fx length and data
        fx_length_bytes = client_socket.recv(4)
        if not fx_length_bytes:
            return
        fx_length = struct.unpack('!I', fx_length_bytes)[0]
        fx_data_bytes = b''
        while len(fx_data_bytes) < fx_length:
            # Never read past this message, or we would swallow the next length prefix
            chunk = client_socket.recv(min(4096, fx_length - len(fx_data_bytes)))
            if not chunk:
                return
            fx_data_bytes += chunk

        # Receive fy length and data
        fy_length_bytes = client_socket.recv(4)
        if not fy_length_bytes:
            return
        fy_length = struct.unpack('!I', fy_length_bytes)[0]
        fy_data_bytes = b''
        while len(fy_data_bytes) < fy_length:
            chunk = client_socket.recv(min(4096, fy_length - len(fy_data_bytes)))
            if not chunk:
                return
            fy_data_bytes += chunk

        print("Received initial fx and fy data from client.")

        # 2. Continuous streaming: receive and process x_interp chunks
        while True:
            # Receive length of the next x_interp chunk
            interp_x_chunk_length_bytes = client_socket.recv(4)
            if not interp_x_chunk_length_bytes:
                print("Client disconnected or finished sending stream.")
                break

            interp_x_chunk_length = struct.unpack('!I', interp_x_chunk_length_bytes)[0]
            if interp_x_chunk_length == 0:  # agreed-upon end-of-stream signal
                print("Received end-of-stream signal (zero-length chunk).")
                break

            interp_x_chunk_data_bytes = b''
            while len(interp_x_chunk_data_bytes) < interp_x_chunk_length:
                chunk = client_socket.recv(min(4096, interp_x_chunk_length - len(interp_x_chunk_data_bytes)))
                if not chunk:
                    print("Client disconnected during chunk reception.")
                    return
                interp_x_chunk_data_bytes += chunk

            # Perform the pseudo-interpolation on the received chunk
            interp_y_binary_chunk = pseudo_interpolate_arcsecant_stream(
                fx_data_bytes, fy_data_bytes, interp_x_chunk_data_bytes
            )

            # Send the binary interpolated y data chunk back to the client
            client_socket.sendall(struct.pack('!I', len(interp_y_binary_chunk)))
            client_socket.sendall(interp_y_binary_chunk)

    except ValueError as e:
        print(f"ValueError on server (streaming): {e}")
        error_message = str(e).encode('utf-8')
        client_socket.sendall(struct.pack('!I', len(error_message)))
        client_socket.sendall(error_message)
    except RuntimeError as e:  # custom RuntimeErrors raised by the interpolation function
        print(f"RuntimeError on server (streaming): {e}")
        error_message = str(e).encode('utf-8')
        client_socket.sendall(struct.pack('!I', len(error_message)))
        client_socket.sendall(error_message)
    except ConnectionResetError:
        print("Client forcibly closed the connection during streaming.")
    except Exception as e:
        print(f"An unexpected error occurred in handle_client_stream: {e}")
    finally:
        client_socket.close()
        print("Connection with client closed.")



def start_server_stream(host, port):
    """
    Starts a server that listens for incoming binary data streams for pseudo-interpolation.
    """
    server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server_socket.bind((host, port))
    server_socket.listen(5)
    print(f"Server listening on {host}:{port} for streaming data...")

    while True:
        client_socket, addr = server_socket.accept()
        print(f"Accepted streaming connection from {addr}")
        handle_client_stream(client_socket)



if __name__ == "__main__":
    SERVER_HOST = '127.0.0.1'
    SERVER_PORT = 12345
    start_server_stream(SERVER_HOST, SERVER_PORT)
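For completeness, here is a minimal client sketch matching the framing used above (4-byte network-order length prefix, then network-order float64 payloads). The host, port, and sample arrays are placeholders, and the sketch does not distinguish error frames from data frames:

import socket
import struct

import numpy as np

def send_msg(sock, payload):
    # Length-prefixed frame: 4-byte network-order length, then the payload
    sock.sendall(struct.pack('!I', len(payload)) + payload)

def recv_exact(sock, n):
    buf = b''
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("Server closed the connection.")
        buf += chunk
    return buf

with socket.create_connection(('127.0.0.1', 12345)) as s:
    fx = np.array([1.0, 2.0, 5.0, 10.0])
    fy = np.array([0.0, 1.0, 2.0, 3.0])
    send_msg(s, struct.pack(f'!{fx.size}d', *fx))
    send_msg(s, struct.pack(f'!{fy.size}d', *fy))

    for chunk in ([3.0, 7.0], [1.5, 20.0]):  # stream of x_interp chunks
        send_msg(s, struct.pack(f'!{len(chunk)}d', *chunk))
        reply_len = struct.unpack('!I', recv_exact(s, 4))[0]
        reply = recv_exact(s, reply_len)
        print(struct.unpack(f'!{reply_len // 8}d', reply))

    send_msg(s, b'')  # zero-length frame signals end-of-stream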


Friday, May 30, 2025

0xA0 - Nearest neighbour tracing in nuclear automata

Radical! Nearest neighbour tracing in nuclear automata. 

 

import numpy as np
import matplotlib.pyplot as plt
import pubchempy as pcp
import requests
import json
import time
import re

# --- Scientific Constants ---
# Nuclear Physics Constants
CURIE_TO_BQ = 3.7e10           # Conversion factor: 1 Curie = 3.7 x 10^10 Becquerels (disintegrations per second)
LN2 = np.log(2)                # Natural logarithm of 2, used in half-life calculations

# Blackbody Radiation Constants
STEFAN_BOLTZMANN_CONSTANT = 5.670374419e-8 # Stefan-Boltzmann constant in W/(m^2·K^4)
PLANCK_CONSTANT = 6.62607015e-34 # Planck's constant in J·s
BOLTZMANN_CONSTANT = 1.380649e-23 # Boltzmann constant in J/K
SPEED_OF_LIGHT = 2.99792458e8 # Speed of light in a vacuum in m/s
WIEN_DISPLACEMENT_CONSTANT = 2.897771955e-3 # Wien's displacement law constant in m·K

# --- Manual Nuclear Data for Gamma Emission ---
# This dictionary stores known half-lives and gamma emission properties for isotopes.
# IMPORTANT: These values should be verified from reliable nuclear data sources (e.g., NNDC).
# PubChem provides general information, but specific gamma yields are best from dedicated databases.
MANUAL_GAMMA_DATA = {
    "Cesium-137": {
        "half_life_days": 30.08 * 365.25, # Half-life in days (approx 30.08 years)
        "gamma_energies_MeV": [0.662],
        "gamma_yields_per_decay": [0.851] # Gamma yield per decay (from Ba-137m isomer)
    },
    "Cobalt-60": {
        "half_life_days": 1925.21, # Half-life in days
        "gamma_energies_MeV": [1.173, 1.332],
        "gamma_yields_per_decay": [0.999, 0.999] # Approximately 1 gamma per decay at each energy
    },
    "Iodine-131": {
        "half_life_days": 8.02, # Half-life in days
        "gamma_energies_MeV": [0.364],
        "gamma_yields_per_decay": [0.817] # Gamma yield per decay
    },
    "Potassium-40": {
        "half_life_days": 1.251e9 * 365.25, # Half-life in days (~1.251 billion years)
        "gamma_energies_MeV": [1.4608],
        "gamma_yields_per_decay": [0.1067] # ~10.67% of K-40 decays via electron capture
    },
    # Feel free to add more isotopes here with their respective data
}

# --- PubChem Helper Functions (for half-life lookup if not in manual data) ---
# These functions attempt to fetch half-life data from PubChem,
# but their parsing can be fragile due to variations in PubChem's data structure.
def _get_compound_cid_pubchem(compound_name):
    """
    Searches PubChem for a compound's CID (Compound ID) by name.
    Returns the CID if found, otherwise None.
    """
    try:
        compounds = pcp.get_compounds(compound_name, 'name')
        return compounds[0].cid if compounds else None
    except Exception:
        return None

def _fetch_half_life_from_pubchem(cid):
    """
    Fetches half-life data from PubChem using a compound's CID.
    Attempts to parse numerical value and unit from the returned text.
    Returns (value, unit) tuple if successful, otherwise (None, None).
    """
    base_url = "https://pubchem.ncbi.nlm.nih.gov/rest/pug_view/data/compound/"
    url = f"{base_url}cid/{cid}/JSON/"
    try:
        response = requests.get(url, timeout=5)
        response.raise_for_status() # Raise an exception for HTTP errors
        data = response.json()

        search_sections = ['Atomic Mass, Half Life, and Decay', 'Chemical and Physical Properties', 'Depositor-Supplied']
        for record in data.get('Record', {}).get('Section', []):
            if record.get('TOCHeading') in search_sections:
                for prop_section in record.get('Section', []):
                    if 'half-life' in prop_section.get('TOCHeading', '').lower():
                        for item in prop_section.get('Information', []):
                            for value_dict in item.get('Value', {}).get('StringWithMarkup', []):
                                text = value_dict.get('String')
                                if text:
                                    # Use re.search (not re.match) so the number and unit
                                    # can appear anywhere in the returned text
                                    match = re.search(r"(\d+\.?\d*(?:e[+-]?\d+)?)\s*(seconds|minutes|hours|days|years|s|min|hr|d|y)", text.lower())
                                    if match:
                                        return float(match.group(1)), match.group(2)
        return None, None
    except Exception:
        return None, None

def _convert_half_life_to_days(value, unit):
    """
    Converts a half-life value from various units to days.
    Returns the half-life in days, or None if the unit is unrecognized.
    """
    if unit in ['s', 'seconds']: return value / (24 * 3600)
    if unit in ['min', 'minutes']: return value / (24 * 60)
    if unit in ['hr', 'hours']: return value / 24
    if unit in ['d', 'days']: return value
    if unit in ['y', 'years']: return value * 365.25
    return None

# --- Core Radiation Calculation Functions ---
def calculate_decay_constant(half_life_days):
    """
    Calculates the radioactive decay constant (lambda) from the half-life.
    Returns the decay constant in inverse seconds (s^-1).
    If half_life_days is non-positive, returns 0 (implying a stable isotope).
    """
    if half_life_days is None or half_life_days <= 0:
        return 0
    half_life_seconds = half_life_days * 24 * 3600 # Convert days to seconds
    return LN2 / half_life_seconds

def calculate_activity_at_time(initial_activity_Bq, decay_constant, time_elapsed_seconds):
    """
    Calculates the activity (number of decays per second) of a radioactive source
    at a given time, using the decay law (N = N0 * e^(-lambda*t)).
    Returns activity in Becquerels (Bq).
    """
    if decay_constant == 0:
        return initial_activity_Bq # For stable isotopes or infinite half-life
    return initial_activity_Bq * np.exp(-decay_constant * time_elapsed_seconds)

def get_gamma_curies_single_point(isotope_name, initial_activity_Bq, time_elapsed_days):
    """
    Calculates the total gamma curies emitted by an isotope at a specific time.
    It first tries to get half-life from the `MANUAL_GAMMA_DATA`, then falls back to PubChem.
    Returns the gamma activity in Curies (Ci), or None if half-life data isn't found.
    """
    half_life_days = None
    if isotope_name in MANUAL_GAMMA_DATA:
        half_life_days = MANUAL_GAMMA_DATA[isotope_name]["half_life_days"]
    else:
        # If not in manual data, try PubChem
        cid = _get_compound_cid_pubchem(isotope_name)
        if cid:
            hl_val, hl_unit = _fetch_half_life_from_pubchem(cid)
            if hl_val:
                half_life_days = _convert_half_life_to_days(hl_val, hl_unit)

    if half_life_days is None:
        return None # Could not determine half-life

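    # Note: gamma yields come only from the manual table; for isotopes resolved
    # via PubChem alone this defaults to [0], so the reported gamma curies are zero.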
    gamma_yields = MANUAL_GAMMA_DATA.get(isotope_name, {}).get("gamma_yields_per_decay", [0])
    total_gamma_yield_per_decay = sum(gamma_yields)

    decay_constant = calculate_decay_constant(half_life_days)
    time_elapsed_seconds = time_elapsed_days * 24 * 3600

    current_activity_Bq = calculate_activity_at_time(initial_activity_Bq, decay_constant, time_elapsed_seconds)
    total_gamma_disintegrations_per_second = current_activity_Bq * total_gamma_yield_per_decay
    gamma_curies = total_gamma_disintegrations_per_second / CURIE_TO_BQ
    
    return gamma_curies

def calculate_blackbody_spectral_radiance(wavelength_m, temperature_K):
    """
    Calculates the spectral radiance of a blackbody at a given wavelength and temperature,
    using Planck's Law.
    Returns spectral radiance in Watts per steradian per square meter per meter of wavelength (W/(m³·sr)).
    Returns 0.0 if temperature or wavelength are non-positive or if the exponent is too large.
    """
    if temperature_K <= 0 or wavelength_m <= 0:
        return 0.0

    term1 = (2 * PLANCK_CONSTANT * SPEED_OF_LIGHT**2) / (wavelength_m**5)
    term2_exp_arg = (PLANCK_CONSTANT * SPEED_OF_LIGHT) / (wavelength_m * BOLTZMANN_CONSTANT * temperature_K)
    
    # Check to prevent numerical overflow for very large exponent arguments
    if term2_exp_arg > 700:
        return 0.0 # Effectively zero emission at this wavelength/temperature
        
    spectral_radiance = term1 / (np.exp(term2_exp_arg) - 1)
    return spectral_radiance

# --- Utility Function for Generating Bell Curve Data ---
def generate_normal_distribution_series(mean, std_dev, num_points=200, spread_factor=3):
    """
    Generates a series of values that follow a normal (bell curve) distribution.
    This is used to create a distribution for a parameter, like measurement time.
    
    Args:
        mean (float): The mean (average) of the distribution.
        std_dev (float): The standard deviation of the distribution.
        num_points (int): The number of data points to generate.
        spread_factor (int): How many standard deviations away from the mean
                              the series should span (e.g., 3 for 3-sigma range).
    
    Returns:
        tuple: A tuple containing:
            - series_values (numpy.array): The generated data points.
            - pdf_values (numpy.array): The probability density function (PDF) values
                                        corresponding to `series_values`.
    """
    start = mean - spread_factor * std_dev
    end = mean + spread_factor * std_dev
    if start < 0:
        start = 0 # Ensure time or other physical quantities are non-negative
    
    series_values = np.linspace(start, end, num_points)
    pdf_values = (1 / (std_dev * np.sqrt(2 * np.pi))) * \
                 np.exp(-0.5 * ((series_values - mean) / std_dev)**2)
    return series_values, pdf_values

# --- Analysis Functions for "Nearest Neighbor Boundaries" and "Displacement" ---

def analyze_gamma_curies_nearest_neighbor_boundaries(isotope_name, initial_activity_Bq, 
                                                     mean_time_days, std_dev_time_days, 
                                                     num_points=200, derivative_threshold_abs=None):
    """
    Calculates gamma curies over a series of measurement times sampled from a bell curve.
    It identifies "nearest neighbor boundaries" by highlighting regions where the
    absolute rate of change (derivative) of gamma curies is above a given threshold.
    This represents where the decay process is changing most rapidly.
    
    Args:
        isotope_name (str): The name of the radioactive isotope (e.g., "Cesium-137").
        initial_activity_Bq (float): The initial activity of the isotope in Becquerels (Bq).
        mean_time_days (float): The mean (average) of the bell curve distribution for measurement times in days.
        std_dev_time_days (float): The standard deviation of the bell curve for measurement times in days.
        num_points (int): The number of time points to simulate.
        derivative_threshold_abs (float, optional): The absolute threshold for the derivative (Ci/day).
                                                    If None, the program will auto-calculate a threshold
                                                    (e.g., 10% of the maximum absolute derivative).
    """
    print(f"\n--- Identifying 'Nearest Neighbor Boundaries' in Gamma Curies ---")
    print(f"  Isotope: {isotope_name}, Initial Activity: {initial_activity_Bq:.2e} Bq")
    print(f"  Time Distribution: Mean={mean_time_days:.2f} days, Std Dev={std_dev_time_days:.2f} days")
    print(f"  'Nearest Neighbor Boundaries' are regions where the decay rate changes significantly.")

    # 1. Generate time series based on a normal distribution
    time_series_days, pdf_values = generate_normal_distribution_series(
        mean_time_days, std_dev_time_days, num_points=num_points)
    
    # 2. Calculate gamma curies for each time point
    curies_series = []
    for t_days in time_series_days:
        curies = get_gamma_curies_single_point(isotope_name, initial_activity_Bq, t_days)
        curies_series.append(curies if curies is not None else np.nan) # Store None as NaN
    curies_series = np.array(curies_series) # Convert to NumPy array for numerical operations

    # Handle cases where half-life was not found (curies_series will be all NaN)
    if np.all(np.isnan(curies_series)):
        print(f"  Calculation failed. Please ensure '{isotope_name}' has half-life data.")
        return

    # 3. Calculate the numerical derivative (rate of change) of gamma curies with respect to time
    # np.gradient provides a robust way to estimate the derivative for discrete data.
    derivative_curies = np.gradient(curies_series, time_series_days)

    # 4. Determine or use a threshold for identifying "boundaries"
    if derivative_threshold_abs is None:
        # If no threshold is provided, auto-calculate it as 10% of the maximum absolute derivative
        # This highlights the top 10% steepest changes.
        max_abs_deriv = np.max(np.abs(derivative_curies[np.isfinite(derivative_curies)]))
        derivative_threshold_abs = max_abs_deriv * 0.1
        print(f"  Auto-calculated derivative threshold: {derivative_threshold_abs:.2e} Ci/day")
    else:
        print(f"  Using provided derivative threshold: {derivative_threshold_abs:.2e} Ci/day")

    # 5. Identify points where the absolute derivative exceeds the threshold
    boundary_indices = np.where(np.abs(derivative_curies) > derivative_threshold_abs)
    boundary_times = time_series_days[boundary_indices]
    boundary_curies = curies_series[boundary_indices]

    # --- Plotting Results ---
    plt.figure(figsize=(12, 7))
    plt.plot(time_series_days, curies_series, label='Gamma Curies Decay')
    plt.scatter(boundary_times, boundary_curies, color='red', s=50, 
                label='Nearest Neighbor Boundaries (High Rate of Change)', zorder=5)
    
    plt.title(f'Gamma Curies with "Nearest Neighbor Boundaries" for {isotope_name}')
    plt.xlabel('Time (days)')
    plt.ylabel('Gamma Curies (Ci)')
    plt.grid(True)
    plt.legend()
    plt.tight_layout()
    plt.show()

    # Optional: Plot the derivative itself for more insight
    plt.figure(figsize=(10, 5))
    plt.plot(time_series_days, derivative_curies, label='Rate of Change (dCuries/dt)')
    plt.axhline(y=derivative_threshold_abs, color='grey', linestyle='--', label='Positive Threshold')
    plt.axhline(y=-derivative_threshold_abs, color='grey', linestyle='--', label='Negative Threshold')
    plt.title('Rate of Change of Gamma Curies over Time')
    plt.xlabel('Time (days)')
    plt.ylabel('dCuries/dt (Ci/day)')
    plt.grid(True)
    plt.legend()
    plt.tight_layout()
    plt.show()

    print(f"\n  Identified {len(boundary_times)} data points in 'Nearest Neighbor Boundary' regions.")
    if len(boundary_times) > 0:
        print("  Example Boundary Points (Time, Curies, Derivative):")
        # Print a few examples from the identified boundary points
        for i in range(min(5, len(boundary_times))):
            idx = boundary_indices[0][i] # Get the original index from the series
            print(f"    Time: {time_series_days[idx]:.2f} days, Curies: {curies_series[idx]:.2e} Ci, Derivative: {derivative_curies[idx]:.2e} Ci/day")

def simulate_blackbody_wien_displacement(temperatures_K, wavelength_range_nm=(1, 10000)):
    """
    Simulates blackbody spectral radiance across a range of wavelengths for given temperatures,
    and demonstrates Wien's Displacement Law by showing how the peak emission wavelength
    ("displaces" or shifts) with temperature.
    
    Args:
        temperatures_K (list): A list of temperatures in Kelvin (K) to simulate.
        wavelength_range_nm (tuple): A tuple (min_wavelength, max_wavelength) in nanometers.
    """
    print(f"\n--- Simulating Blackbody Spectral Radiance & Wien's Displacement Law ---")
    print(f"  Analyzing temperatures: {temperatures_K} K")
    
    wavelengths_nm = np.linspace(wavelength_range_nm[0], wavelength_range_nm[1], 500)
    wavelengths_m = wavelengths_nm * 1e-9 # Convert nanometers to meters

    peak_wavelengths_data = [] # To store (Temperature, Peak Wavelength) for Wien's Law plot

    plt.figure(figsize=(12, 7))
    for T in temperatures_K:
        if T <= 0:
            print(f"  Skipping temperature {T}K as it is non-positive.")
            continue
        radiance_series = [calculate_blackbody_spectral_radiance(wl_m, T) for wl_m in wavelengths_m]
        
        # Find the peak wavelength for the current temperature
        max_radiance_idx = np.argmax(radiance_series)
        peak_wl_m = wavelengths_m[max_radiance_idx]
        peak_wl_nm = wavelengths_nm[max_radiance_idx]
        peak_wavelengths_data.append((T, peak_wl_nm))

        plt.plot(wavelengths_nm, radiance_series, label=f'T = {T} K (Peak: {peak_wl_nm:.2f} nm)')
    
    plt.title('Blackbody Spectral Radiance (Planck\'s Law)')
    plt.xlabel('Wavelength (nm)')
    plt.ylabel('Spectral Radiance (W/(m³·sr))')
    plt.yscale('log') # Log scale helps visualize broad range of radiance values
    plt.xscale('log') # Log scale for wavelength can also be useful for broad ranges
    plt.grid(True, which="both", ls="--", alpha=0.7)
    plt.legend()
    plt.tight_layout()
    plt.show()

    print("\n--- Wien's Displacement Law Data Series ---")
    print("  This shows how the peak emission wavelength 'displaces' (shifts) to shorter wavelengths as temperature increases.")
    
    temp_list_sorted = [data[0] for data in sorted(peak_wavelengths_data)]
    peak_wl_list = [data[1] for data in sorted(peak_wavelengths_data)]
    
    for T, peak_wl_nm in sorted(peak_wavelengths_data):
        print(f"  Temperature: {T} K, Calculated Peak Wavelength: {peak_wl_nm:.2f} nm")
        
    plt.figure(figsize=(8, 6))
    plt.plot(temp_list_sorted, peak_wl_list, 'o-', label='Calculated Peak Wavelength')
    
    # Calculate and plot the theoretical curve from Wien's Displacement Law
    theoretical_peak_nm = (WIEN_DISPLACEMENT_CONSTANT / np.array(temp_list_sorted)) * 1e9
    plt.plot(temp_list_sorted, theoretical_peak_nm, 'r--', label='Theoretical (Wien\'s Law)')

    plt.title('Wien\'s Displacement Law: Peak Wavelength vs. Temperature')
    plt.xlabel('Temperature (K)')
    plt.ylabel('Peak Wavelength (nm)')
    plt.grid(True)
    plt.legend()
    plt.tight_layout()
    plt.show()

# --- Main Program Execution ---
if __name__ == "__main__":
    print("--- Radiation Analysis Program: Interpreting Data Behavior ---")
    print("\nThis program helps you understand how different scientific concepts can be analyzed using statistical ideas.")
    print("We'll explore two scenarios:")
    print("  1.  **'Nearest Neighbor Boundaries' in Gamma Curies:** This identifies points where the rate of radioactive decay is changing most rapidly, relative to measurement times that follow a bell curve.")
    print("  2.  **Blackbody Wavelength 'Displacement':** This demonstrates how the peak wavelength of light emitted by a hot object shifts as its temperature changes (Wien's Law).\n")
    print("Please note: The terms 'Nearest Neighbor Boundaries' and 'Edge States' are statistical interpretations here, not direct physical phenomena like in quantum mechanics.\n")

    while True:
        main_choice = input("Select: (N)earest Neighbor Boundaries (Gamma Curies), (B)lackbody Wavelength Displacement, or (Q)uit? ").strip().lower()

        if main_choice == 'q':
            break
        elif main_choice == 'n':
            print("\n--- 'Nearest Neighbor Boundaries' for Gamma Curies ---")
            isotope_name = input("  Enter radioactive isotope name (e.g., Cesium-137, Cobalt-60): ").strip()
            # Check if isotope data is available either manually or via PubChem
            if isotope_name not in MANUAL_GAMMA_DATA and _get_compound_cid_pubchem(isotope_name) is None:
                print(f"  '{isotope_name}' data not found. Please check spelling or add it to MANUAL_GAMMA_DATA.")
                continue
            
            try:
                initial_activity = float(input(f"  Enter initial activity of {isotope_name} in Becquerels (Bq): "))
                mean_time = float(input(f"  Enter mean measurement time for the time distribution (days): "))
                std_dev_time = float(input(f"  Enter standard deviation for the measurement time (days): "))
                num_points = int(input(f"  Enter number of data points for the simulation (e.g., 200): "))
                
                deriv_thresh_input = input(f"  Enter absolute derivative threshold for 'boundaries' (Ci/day, leave blank for auto-calc): ")
                derivative_threshold_abs = float(deriv_thresh_input) if deriv_thresh_input else None

            except ValueError:
                print("  Invalid input. Please enter numerical values.")
                continue
            
            analyze_gamma_curies_nearest_neighbor_boundaries(
                isotope_name, initial_activity, mean_time, std_dev_time, num_points, derivative_threshold_abs)
            
            time.sleep(1) # Small pause for readability

        elif main_choice == 'b':
            print("\n--- Blackbody Wavelength 'Displacement' (Wien's Law) ---")
            temp_input_str = input("  Enter temperatures (K) separated by commas (e.g., 300, 1000, 5000): ")
            try:
                temperatures_list = [float(t.strip()) for t in temp_input_str.split(',')]
            except ValueError:
                print("  Invalid temperature input. Please enter numbers separated by commas.")
                continue
            
            simulate_blackbody_wien_displacement(temperatures_list)
            
            time.sleep(1) # Small pause for readability

        else:
            print("Invalid choice. Please enter 'N', 'B', or 'Q'.")

    print("\nExiting program. Thank you for exploring these concepts!")
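If you want to try this one, the dependencies match the imports at the top of the listing (a reasonable install line, assuming a standard Python setup):

Bash
pip install numpy matplotlib pubchempy requests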

0xEB Enthalpy of Formation - Pubchem API

 

Okay, this is a much more chemically relevant challenge, but it still requires a clear definition of "chemical stability" that can be programmatically accessed or inferred from public data.

"Chemical stability" can mean several things:

  1. Thermodynamic Stability (relative to elements): Often quantified by the standard enthalpy of formation (ΔH_f°). A more negative (lower) ΔH_f° generally indicates a more thermodynamically stable compound relative to its constituent elements in their standard states.
  2. Thermodynamic Stability (relative to decomposition/reaction products): Requires comparing Gibbs free energies of various possible reactions, which is far too complex for a general PubChem API script.
  3. Kinetic Stability: How fast a compound decomposes or reacts. This involves activation energies and reaction mechanisms, almost never found directly in PubChem's property listings.
  4. Stability against specific conditions: E.g., hydrolytic stability, thermal stability (decomposition temperature). These might be mentioned qualitatively in PubChem.

For a practical demonstration using PubChem-like data retrieval, focusing on thermodynamic stability as indicated by the standard enthalpy of formation (ΔH_f°) is the most feasible and quantifiable approach.
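As a concrete illustration, well-known textbook values already show the ordering the program will look for (values quoted from memory, so treat them as approximate):

Python
# Standard enthalpies of formation in kJ/mol (common textbook values)
dhf = {
    "carbon dioxide (g)": -393.5,
    "water (l)": -285.8,
    "nitrogen dioxide (g)": 33.2,
}
most_stable = min(dhf, key=dhf.get)  # most negative ΔH_f°
print(most_stable)  # -> carbon dioxide (g): most stable relative to its elements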

Important Considerations and Limitations:

  • Data Availability: Similar to critical properties, the standard enthalpy of formation (ΔH_f°) is not consistently available for all compounds in PubChem. PubChem is a broad chemical information database, not a specialized thermochemical one.
  • Preferred Data Source: For reliable thermochemical data, specialized databases like the NIST Chemistry WebBook are far superior. While this program will attempt to retrieve from PubChem, be aware of its limitations.
  • Units: Enthalpy of formation can be in kJ/mol or kcal/mol. We'll need to standardize (e.g., to kJ/mol).
  • Parsing Fragility: Extracting these values from PubChem's text-based sections will again rely on heuristic string parsing, which can be fragile.

Program Design:

  1. Input: The user will provide one or more compound names.
  2. PubChem Lookup (CID): Use pubchempy to get the CID.
  3. Fetch Enthalpy of Formation: A new function will query PubChem's pug_view endpoint for ΔH_f° data.
  4. Stability Analysis: If data is found for multiple compounds, compare their values to determine relative stability.
  5. Error Handling: Gracefully handle missing data and API failures.

Refitted Program Code (Chemical Stability via Enthalpy of Formation)

Python
import numpy as np # Not strictly needed for this version, but good practice for scientific tools
import pubchempy as pcp
import requests
import json
import time

# --- PubChem Integration Functions ---
def get_compound_cid(compound_name):
    """Searches PubChem for a compound by name and returns its CID."""
    try:
        compounds = pcp.get_compounds(compound_name, 'name')
        if compounds:
            print(f"  Found '{compound_name}'. CID: {compounds[0].cid}")
            return compounds[0].cid
        else:
            print(f"  No compound found for '{compound_name}'.")
            return None
    except Exception as e:
        print(f"  Error getting CID for '{compound_name}': {e}")
        return None

def fetch_enthalpy_of_formation_from_pubchem(cid):
    """
    Fetches the Standard Enthalpy of Formation (ΔH_f°) from PubChem using PUG-REST.
    This property is often found in 'Chemical and Physical Properties' or
    'Depositor-Supplied' sections.
    """
    base_url = "https://pubchem.ncbi.nlm.nih.gov/rest/pug_view/data/compound/"
    url = f"{base_url}cid/{cid}/JSON/"
    enthalpy_of_formation = None
    unit = None

    print(f"  Attempting to fetch detailed data for CID {cid} from PubChem for ΔH_f°...")
    try:
        response = requests.get(url, timeout=30) # 30-second timeout
        response.raise_for_status() # Raise an exception for HTTP errors (4xx or 5xx)
        data = response.json()

        # Search for enthalpy of formation in common sections
        search_terms = ['enthalpy of formation', 'heat of formation', 'standard enthalpy of formation']

        for record in data.get('Record', {}).get('Section', []):
            if record.get('TOCHeading') in ['Chemical and Physical Properties', 'Depositor-Supplied Synonyms and Properties']:
                for prop_section in record.get('Section', []):
                    heading_lower = prop_section.get('TOCHeading', '').lower()
                    
                    found_term = False
                    for term in search_terms:
                        if term in heading_lower:
                            found_term = True
                            break
                    
                    if found_term:
                        for item in prop_section.get('Information', []):
                            for value_dict in item.get('Value', {}).get('StringWithMarkup', []):
                                text = value_dict.get('String')
                                if text:
                                    # Attempt to parse value and unit
                                    # This parsing is highly heuristic and may fail for varied formats
                                    text_lower = text.lower()
                                    try:
                                        if 'kj/mol' in text_lower:
                                            # Find the number before 'kJ/mol'
                                            val_str = text_lower.split('kj/mol')[0].strip().split(' ')[-1]
                                            enthalpy_of_formation = float(val_str)
                                            unit = 'kJ/mol'
                                            print(f"    Found ΔH_f°: {enthalpy_of_formation} {unit}")
                                            return enthalpy_of_formation, unit
                                        elif 'kcal/mol' in text_lower:
                                            val_str = text_lower.split('kcal/mol')[0].strip().split(' ')[-1]
                                            enthalpy_of_formation = float(val_str)
                                            unit = 'kcal/mol'
                                            print(f"    Found ΔH_f°: {enthalpy_of_formation} {unit}")
                                            return enthalpy_of_formation, unit
                                        # Add more unit parsing as needed
                                    except ValueError:
                                        pass # Failed to parse as float, continue searching

        print("  Could not find Standard Enthalpy of Formation in PubChem for this compound.")
        return None, None

    except requests.exceptions.RequestException as e:
        print(f"  Network or API error while fetching PubChem data: {e}")
        return None, None
    except json.JSONDecodeError as e:
        print(f"  Error parsing JSON from PubChem: {e}")
        return None, None
    except Exception as e:
        print(f"  An unexpected error occurred: {e}")
        return None, None

def convert_to_kj_per_mol(value, unit):
    """Converts enthalpy values to kJ/mol for consistent comparison."""
    if unit == 'kJ/mol':
        return value
    elif unit == 'kcal/mol':
        return value * 4.184 # 1 kcal = 4.184 kJ
    else:
        print(f"Warning: Unknown unit '{unit}'. Cannot convert.")
        return None

def analyze_stability(compounds_data):
    """
    Analyzes and compares the thermodynamic stability of compounds based on
    their standard enthalpy of formation (ΔH_f°).
    """
    if not compounds_data:
        print("\nNo data available for stability analysis.")
        return

    print("\n--- Chemical Stability Analysis (via Standard Enthalpy of Formation) ---")
    print("  Lower (more negative) ΔH_f° generally indicates greater thermodynamic stability")
    print("  relative to elements in their standard states.\n")

    valid_data = []
    for name, data in compounds_data.items():
        if data['enthalpy_kj_mol'] is not None:
            valid_data.append(data)
            print(f"  {name}: ΔH_f° = {data['enthalpy_kj_mol']:.2f} kJ/mol")
        else:
            print(f"  {name}: ΔH_f° data not found or invalid.")
    
    if not valid_data:
        print("  No valid enthalpy of formation data found for comparison.")
        return

    # Find the most stable compound(s)
    most_stable_value = min(d['enthalpy_kj_mol'] for d in valid_data)
    most_stable_compounds = [d['name'] for d in valid_data if d['enthalpy_kj_mol'] == most_stable_value]

    print("\n--- Analysis Result ---")
    if len(most_stable_compounds) == 1:
        print(f"  The most thermodynamically stable compound is: {most_stable_compounds[0]}")
    else:
        print(f"  The most thermodynamically stable compounds (with similar ΔH_f°) are: {', '.join(most_stable_compounds)}")
    print(f"  (ΔH_f°: {most_stable_value:.2f} kJ/mol)")
    print("-" * 60)


# --- Main execution loop ---
if __name__ == "__main__":
    compounds_to_analyze = {}

    while True:
        compound_name_input = input("\nEnter compound name to analyze (or 'done' to analyze current list, 'exit' to quit): ").strip()
        if compound_name_input.lower() == 'exit':
            break
        if compound_name_input.lower() == 'done':
            if compounds_to_analyze:
                analyze_stability(compounds_to_analyze)
                # Clear the list for a new analysis
                compounds_to_analyze = {} 
            else:
                print("No compounds entered for analysis yet.")
            continue # Continue to next prompt

        print(f"\n--- Searching for '{compound_name_input}' ---")
        cid = get_compound_cid(compound_name_input)

        if cid:
            time.sleep(0.1) # Small delay for API rate limits
            enthalpy, unit = fetch_enthalpy_of_formation_from_pubchem(cid)
            
            enthalpy_kj_mol = None
            if enthalpy is not None and unit is not None:
                enthalpy_kj_mol = convert_to_kj_per_mol(enthalpy, unit)
                if enthalpy_kj_mol is not None:
                    print(f"  {compound_name_input}: ΔH_f° = {enthalpy_kj_mol:.2f} kJ/mol (Original: {enthalpy} {unit})")
                    compounds_to_analyze[compound_name_input] = {
                        'name': compound_name_input,
                        'cid': cid,
                        'enthalpy_kj_mol': enthalpy_kj_mol
                    }
                else:
                    print(f"  Could not get valid ΔH_f° in kJ/mol for '{compound_name_input}'.")
            else:
                print(f"  ΔH_f° data not found for '{compound_name_input}'.")
        else:
            print(f"  Failed to find CID for '{compound_name_input}'.")

    print("\nExiting program.")


How to Run and Test:

  1. Save: Save the code as a Python file (e.g., chemical_stability.py).
  2. Install Libraries: If you haven't already, install the necessary libraries:
    Bash
    pip install numpy pubchempy requests
    
    (Note: numpy is included as a common scientific library, though not strictly required for this specific stability calculation).
  3. Execute: Run the script from your terminal:
    Bash
    python chemical_stability.py
    
  4. Enter Compound Names:
    • The program will prompt you to enter compound names one by one.
    • Good Candidates (likely to have ΔH_f°): Try "carbon dioxide", "methane", "water", "ethanol", "ammonia", "benzene". These are common compounds for which thermochemical data is usually available.
    • Less Common Compounds: For many arbitrary or complex compounds, you will likely find that ΔH_f° is not available in PubChem. The program will report this.
    • Type done: After entering several compounds, type done to initiate the stability analysis and compare the compounds you've entered.
    • Type exit: To quit the program.

Explanation of Changes:

  1. fetch_enthalpy_of_formation_from_pubchem(cid):

    • This is the core new function.
    • It uses direct PUG-REST calls (pug_view) to retrieve the full JSON record.
    • It now specifically searches for common terms like "enthalpy of formation" or "heat of formation" within the 'Chemical and Physical Properties' and 'Depositor-Supplied' sections.
    • Heuristic Parsing: It attempts to extract a numerical value and a unit (e.g., "kJ/mol", "kcal/mol") from the unstructured text. This part is still highly heuristic and will fail if the text format is unexpected.
    • Returns the value and unit if found, otherwise None, None.
  2. convert_to_kj_per_mol(value, unit):

    • A utility function to standardize all fetched enthalpy values to kJ/mol, which is crucial for direct comparison.
  3. analyze_stability(compounds_data):

    • This function takes a dictionary of compounds and their fetched enthalpy data.
    • It filters for compounds where ΔH_f° was successfully retrieved and converted.
    • It then finds the minimum (most negative) ΔH_f° among the valid entries to identify the "most stable" compound(s).
    • Provides an interpretation that a lower ΔH_f° means greater stability.
  4. Main Execution Loop (if __name__ == "__main__":):

    • Modified to allow the user to input multiple compound names.
    • Stores the retrieved enthalpy_kj_mol for each compound in a compounds_to_analyze dictionary.
    • When the user types done, it calls analyze_stability with the collected data.
    • Includes time.sleep(0.1) for API rate limiting.
    • Error handling for API calls and data parsing is maintained.

This program provides a solid foundation for assessing one aspect of chemical stability using publicly available data, while clearly highlighting the challenges and limitations of relying on general chemical information databases for specific thermodynamic properties.