A BPP algorithm can be seen as a deterministic algorithm that takes a string of random bits as part of its input. One way to derandomize a BPP algorithm is to run it with every possible random string and pick the answer that the majority of random strings lead to. This blows up the running time by a factor of 2^r, where r is the length of the random string, making the running time exponential.
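As a minimal sketch of this brute-force derandomization (the function `bpp_algorithm(x, bits)` is a hypothetical stand-in for the deterministic algorithm given its random bits):

```python
from itertools import product

def derandomize_brute_force(bpp_algorithm, x, r):
    """Run the algorithm on input x with every r-bit random string and
    return the majority answer. Since at least 2/3 of random strings
    give the correct answer, the majority vote is always correct,
    but the loop runs 2^r times."""
    votes = 0
    for bits in product([0, 1], repeat=r):  # 2^r iterations
        votes += 1 if bpp_algorithm(x, bits) else -1
    return votes > 0
```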
If the PRNG can take a seed of length log(n) and generate a random string from it, then you might try to take a BPP algorithm and reduce the length of its random input to log(n) using the PRNG. We assume that "good enough" means the new algorithm still has the usual BPP guarantee: it gives the correct answer for at least 2/3 of the seeds.
However, now derandomizing by running it on every possible seed gives a slowdown of only 2^log(n) = n, which means the running time stays polynomial.
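A sketch of the seed-enumeration version, under the same assumptions (here `prng(seed, r)` is a hypothetical generator that expands a seed into an r-bit pseudorandom string):

```python
import math

def derandomize_with_prng(bpp_algorithm, prng, x, n, r):
    """Enumerate every seed of length ceil(log2 n), expand it to an
    r-bit pseudorandom string, and take the majority answer.
    Only about 2^{log n} = n seeds are tried, so the slowdown is polynomial."""
    seed_len = math.ceil(math.log2(n))
    votes = 0
    for seed in range(2 ** seed_len):  # ~n iterations instead of 2^r
        bits = prng(seed, r)           # assumed: expands the seed to r bits
        votes += 1 if bpp_algorithm(x, bits) else -1
    return votes > 0
```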
Are there problems in BPP that can defend against "helping" PRNGs by diagonalizing against them somehow?