
TestMLDA.test_aem_mu_sigma is failing #5099


Closed
twiecki opened this issue Oct 25, 2021 · 3 comments · Fixed by #5104
Comments

@twiecki
Member

twiecki commented Oct 25, 2021

=================================== FAILURES ===================================
__________________________ TestMLDA.test_aem_mu_sigma __________________________

self = <pymc.tests.test_step.TestMLDA object at 0x7f0d8c30bad0>

    def test_aem_mu_sigma(self):
        """Test that AEM estimates mu_B and Sigma_B in
        the coarse models of a 3-level LR example correctly"""
        # create data for linear regression
        if aesara.config.floatX == "float32":
            p = "float32"
        else:
            p = "float64"
        np.random.seed(123456)
        size = 200
        true_intercept = 1
        true_slope = 2
        sigma = 1
        x = np.linspace(0, 1, size, dtype=p)
        # y = a + b*x
        true_regression_line = true_intercept + true_slope * x
        # add noise
        y = true_regression_line + np.random.normal(0, sigma ** 2, size)
        s = np.identity(y.shape[0], dtype=p)
        np.fill_diagonal(s, sigma ** 2)

        # forward model Op - here, just the regression equation
        class ForwardModel(Op):
            if aesara.config.floatX == "float32":
                itypes = [at.fvector]
                otypes = [at.fvector]
            else:
                itypes = [at.dvector]
                otypes = [at.dvector]

            def __init__(self, x, pymc_model):
                self.x = x
                self.pymc_model = pymc_model

            def perform(self, node, inputs, outputs):
                intercept = inputs[0][0]
                x_coeff = inputs[0][1]

                temp = intercept + x_coeff * x + self.pymc_model.bias.get_value()
                with self.pymc_model:
                    set_data({"model_output": temp})
                outputs[0][0] = np.array(temp)

        # create the coarse models with separate biases
        mout = []
        coarse_models = []

        with Model() as coarse_model_0:
            bias = Data("bias", 3.5 * np.ones(y.shape, dtype=p))
            mu_B = Data("mu_B", -1.3 * np.ones(y.shape, dtype=p))
            Sigma_B = Data("Sigma_B", np.zeros((y.shape[0], y.shape[0]), dtype=p))
            model_output = Data("model_output", np.zeros(y.shape, dtype=p))
            Sigma_e = Data("Sigma_e", s)

            # Define priors
            intercept = Normal("Intercept", 0, sigma=20)
            x_coeff = Normal("x", 0, sigma=20)

            theta = at.as_tensor_variable([intercept, x_coeff])

            mout.append(ForwardModel(x, coarse_model_0))

            # Define likelihood
            likelihood = MvNormal("y", mu=mout[0](theta) + mu_B, cov=Sigma_e, observed=y)

            coarse_models.append(coarse_model_0)

        with Model() as coarse_model_1:
            bias = Data("bias", 2.2 * np.ones(y.shape, dtype=p))
            mu_B = Data("mu_B", -2.2 * np.ones(y.shape, dtype=p))
            Sigma_B = Data("Sigma_B", np.zeros((y.shape[0], y.shape[0]), dtype=p))
            model_output = Data("model_output", np.zeros(y.shape, dtype=p))
            Sigma_e = Data("Sigma_e", s)

            # Define priors
            intercept = Normal("Intercept", 0, sigma=20)
            x_coeff = Normal("x", 0, sigma=20)

            theta = at.as_tensor_variable([intercept, x_coeff])

            mout.append(ForwardModel(x, coarse_model_1))

            # Define likelihood
            likelihood = MvNormal("y", mu=mout[1](theta) + mu_B, cov=Sigma_e, observed=y)

            coarse_models.append(coarse_model_1)

        # fine model and inference
        with Model() as model:
            bias = Data("bias", np.zeros(y.shape, dtype=p))
            model_output = Data("model_output", np.zeros(y.shape, dtype=p))
            Sigma_e = Data("Sigma_e", s)

            # Define priors
            intercept = Normal("Intercept", 0, sigma=20)
            x_coeff = Normal("x", 0, sigma=20)

            theta = at.as_tensor_variable([intercept, x_coeff])

            mout.append(ForwardModel(x, model))

            # Define likelihood
            likelihood = MvNormal("y", mu=mout[-1](theta), cov=Sigma_e, observed=y)

            step_mlda = MLDA(coarse_models=coarse_models, adaptive_error_model=True)

            trace_mlda = sample(
                draws=100,
                step=step_mlda,
                chains=1,
                tune=200,
                discard_tuned_samples=True,
                random_seed=84759238,
            )

            m0 = step_mlda.step_method_below.model_below.mu_B.get_value()
            s0 = step_mlda.step_method_below.model_below.Sigma_B.get_value()
            m1 = step_mlda.model_below.mu_B.get_value()
            s1 = step_mlda.model_below.Sigma_B.get_value()

>           assert np.allclose(m0, -3.5)
E           assert False
E            +  where False = <function allclose at 0x7f0df851a4d0>(array([-3.487413 , -3.4893727, -3.4913301, -3.493289 , -3.495273 ,\n       -3.49723  , -3.49919  , -3.5011551, -3.50311...44214, -3.8663938, -3.8683414,\n       -3.8703194, -3.8722787, -3.874239 , -3.876195 , -3.8781743],\n      dtype=float32), -3.5)
E            +    where <function allclose at 0x7f0df851a4d0> = np.allclose
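For context (my reading of the test setup, so treat the arithmetic below as an assumption rather than a documented guarantee): the fine model uses bias = 0 while coarse_model_0 uses bias = 3.5, so with the adaptive error model the bottom-level bias estimate should settle at roughly

    mu_B ≈ 0 - 3.5 = -3.5

whereas the array in the failure drifts from about -3.49 down to -3.88 across the 200 data points instead of staying near -3.5.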
@ricardoV94
Member

ricardoV94 commented Oct 25, 2021

Might be related to #4887; I think the two PRs were developed more or less in parallel, without accounting for each other.

@mikkelbue
Contributor

I don't think this is a flaky test; rather, the error model relies on the Metropolis proposal being the last point evaluated by the Aesara Op. This might be a result of the changes in the handling of logp that Ricardo mentioned.

I am working on it.
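To illustrate the dependency, here is a minimal plain-Python sketch of why evaluation order matters; it is not the actual PyMC/Aesara machinery, and the names (shared, forward_model, bias_sample) and numbers are made up for illustration.

# Stand-in for the "model_output" Data/shared variable that ForwardModel.perform()
# overwrites via set_data() as a side effect.
shared = {"model_output": None}

def forward_model(theta, bias):
    # Like ForwardModel.perform(): compute the output and, as a side effect,
    # store it where the error model will later look for it.
    out = theta[0] + theta[1] * 0.5 + bias
    shared["model_output"] = out
    return out

def bias_sample(fine_output):
    # The adaptive error model reads whatever is currently stored in model_output
    # and implicitly assumes the *last* forward-model evaluation was the Metropolis
    # proposal the chain actually moved to.
    return fine_output - shared["model_output"]

# Expected order: the accepted proposal is evaluated last, so the stored output
# matches the point used to update mu_B (numbers contrived to give -3.5,
# the value the test asserts for m0).
forward_model(theta=(1.0, 2.0), bias=3.5)
print(bias_sample(fine_output=2.0))   # -3.5

# If the logp machinery evaluates some other point afterwards, the stored output
# is stale and the mu_B / Sigma_B estimates drift, as in the failing assert.
forward_model(theta=(0.0, 0.0), bias=3.5)
print(bias_sample(fine_output=2.0))   # -1.5, based on the wrong model output

So the fix needs to make sure the forward-model output stored for the error-model update corresponds to the point the sampler actually used, regardless of how the logp graph orders its evaluations.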

ricardoV94 added this to the v4.0.0-beta2 milestone Oct 26, 2021
ricardoV94 changed the title from "Flaky MLDA test?" to "TestMLDA.test_aem_mu_sigma is failing" Oct 26, 2021
@mikkelbue
Contributor

This is fixed in #5104
