Dear all,
My talk at Mt Stromlo went very well.
Brian Schmidt of supernovae fame made a good remark: instead of looking for peaks, one could simply require that the xi(r)'s in different z bins are similar (he thought of using a maximum likelihood technique). He was referring to Alcock & Paczynski...
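(To make that concrete, here is a minimal sketch of what such a likelihood comparison could look like, assuming Gaussian, independent errors on each bin of xi(r) - the function name and inputs are mine, not Brian's:

    import numpy as np

    def log_likelihood_same_curve(xi1, sig1, xi2, sig2):
        # Log-likelihood that two xi(r) estimates, on the same r bins,
        # trace the same underlying curve, assuming independent Gaussian
        # errors sig1, sig2 per bin.
        var = sig1**2 + sig2**2
        return -0.5 * np.sum((xi1 - xi2)**2 / var + np.log(2.0 * np.pi * var))

One would then compare this between trial cosmologies, recomputing the comoving separations in each trial geometry - which is where Alcock & Paczynski comes in.)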
Matthew Colless mentioned that both 2dF galaxies (2dFGRS) and quasars (2QZ) show a feature in P(k) at k = 2pi/(89 h^-1 Mpc) =~ 0.07 h Mpc^-1.
Matthew's postdoc, Roberto de Propris, showed me a plot of the correlation function of the superposition of many pencil beams drawn through the 2dFGRS. He has access to the full 250k sample, and avoided computing a complete xi(r) using all ~30 billion separations. There is a feature around 250 Mpc, but there are also stronger features elsewhere, and none at 130 h^-1 Mpc. It thus appears that Broadhurst et al. were lucky back in 1990.
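(For scale: the full pair count is N(N-1)/2 =~ 3 x 10^10 for N = 250,000. Restricting to pairs within individual pencil beams is what makes it tractable - schematically, and certainly not Roberto's actual code:

    import numpy as np

    def beam_pair_histogram(d, bins):
        # d: comoving distances (h^-1 Mpc) of the galaxies along ONE pencil
        # beam; histogram all pairwise line-of-sight separations in the beam.
        i, j = np.triu_indices(len(d), k=1)
        return np.histogram(np.abs(d[i] - d[j]), bins=bins)[0]

    # superpose many beams by summing their pair histograms:
    # counts = sum(beam_pair_histogram(beam, bins) for beam in beams)

With, say, a few hundred galaxies per beam, each beam costs only ~10^4-10^5 separations rather than ~10^10.)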
cheers
Gary
Hi everyone,
On Tue, 4 Jun 2002, Gary Mamon wrote:
Dear all,
My talk at Mt Stromlo went very well.
Brian Schmidt of supernovae fame made a good remark: instead of looking
I met Brian on my last visit to Stromlo - and hopefully convinced him to drop the word "global" from his description of his work on *local* cosmological parameters...
for peaks, one could simply require that the xi(r)'s in different z bins are similar (he thought of using a maximum likelihood technique). He was
Michał Frąckowiak had the same idea last Friday :) - he suggested multiplying xi_1 xi_2 and dividing by the uncertainties, or alternatively taking abs(xi_1 - xi_2). Given the different amplitudes at different redshifts, it seems that xi_1 xi_2 is the most reasonable.
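(In code the two suggestions would be something like this - my own transcription, assuming per-bin uncertainties sig1, sig2; note it is ambiguous whether "dividing by the uncertainties" means sig1*sig2 or sig1^2 + sig2^2:

    import numpy as np

    def product_statistic(xi1, xi2, sig1, sig2):
        # first suggestion: error-weighted product of the two curves
        return np.sum(xi1 * xi2 / (sig1 * sig2))

    def absdiff_statistic(xi1, xi2):
        # the alternative: summed absolute difference
        return np.sum(np.abs(xi1 - xi2))

Neither is yet normalised into a likelihood; that is what the maximum likelihood write-up further down the thread addresses.)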
referring to Alcock & Paczynski...
Matthew Colless mentioned that both 2dF galaxies (2dFGRS) and quasars (2QZ) show a feature in P(k) at k = 2pi/(89 h^-1 Mpc) =~ 0.07 h Mpc^-1.
Matthew's postdoc, Roberto de Propris, showed me a plot of the correlation function of the superposition of many pencil beams drawn through the 2dFGRS. He has access to the full 250k sample, and avoided computing a complete xi(r) using all ~30 billion separations. There is a feature around 250 Mpc, but there are also stronger features elsewhere, and none at 130 h^-1 Mpc. It thus appears that Broadhurst et al. were lucky back in 1990.
Well, I still think the 2dFGRS has much too small a volume to be able to say anything as significant as the 2QZ, which covers a much larger one.
Bye for now, Boud
Hi!
I am sending a short write-up of the theory behind the maximum likelihood method and its adaptation to comparing curves with given errors. I believe this will be helpful - it seems to me the best way that does the trick:
- it takes particular errors into account
- lets you estimate errors of the fit
- as well as contours of confidence when making a grid.
This is well tested in my own software, so I hope it should work in this case as well. In case of questions - I would be more than glad to help.
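The details are in the attachment; the gist, as a rough sketch (assuming simple Gaussian errors - the actual eqs. (4) and (5) are in the attached notes):

    import numpy as np

    def log_L(model, data, sigma):
        # Gaussian log-likelihood of a data curve given a model curve,
        # with per-point errors sigma.
        return -0.5 * np.sum(((data - model) / sigma)**2)

    # Scan a grid of model parameters, keep log_L at each grid point, and
    # draw contours of Delta(-2 log_L) = 2.30, 6.17 around the maximum:
    # those are the 68.3% and 95.4% confidence contours for two parameters.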
regards Michal
Thanks Michał,
On 3 Jun 2002, Michal Frackowiak wrote:
I am sending a short write-up of the theory behind the maximum likelihood method and its adaptation to comparing curves with given errors. I believe this will be helpful - it seems to me the best way that does the trick:
- it takes particular errors into account
- lets you estimate errors of the fit
- as well as contours of confidence when making a grid.
It's a nice explanation.
This is well tested in my own software, so I hope it should work in this case as well. In case of questions - I would be more than glad to help.
Well, although it's clear the method can give a result, for it to give a correct result the different r values would need to be independent of one another.
So I see three problems, 2 easily solvable, 1 more fundamental:
- solvable: (1) If we combine all three, L_{12} L_{23} L_{13}, then one of the three is dependent on the other two. So it seems to me that we have to remove one of the three, even though it's clear that this is an arbitrary choice.
(2) There is some smoothing in the curves output by DEplotcorrnall. This can be removed (just set ismoo=0 on line 283 of DEplotcorrnall in DE-V0.04 - I think this should be OK), but then my worry is that the result will be extremely noisy. An alternative solution would be to test only one out of every (ismoo+1) values of r (see the sketch at the end of this message).
- fundamental: (3) Different bins in a correlation function depend on one another. A single quasar is a member of many pairs, and different pairs fall into different bins. So the different r values in a single function xi(r) depend on one another.
For this reason, I find it hard to believe that L (rescaled) would be a true probability density function.
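(In principle one could allow for the bin-bin correlations with a full covariance matrix C, estimated e.g. from mock catalogues - a sketch of the standard generalisation, not something DE-V0.04 does:

    ln L = -(1/2) (xi_1 - xi_2)^T C^{-1} (xi_1 - xi_2) + const

but estimating C well enough to invert it is a problem of its own.)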
It's certainly a good idea, so I'll put it as one of the DEplot_cf tests, but I don't think it'll give true error bars.
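Here is the subsampling alternative from point (2), as a sketch - assuming r, xi and sig are the arrays read in from the DEplotcorrnall output (those names are mine):

    def subsample_independent(r, xi, sig, ismoo):
        # keep one point in every (ismoo+1), so that neighbouring kept
        # points no longer share the same smoothing window
        step = ismoo + 1
        return r[::step], xi[::step], sig[::step]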
Cheers, Boud
On Wed, 2002-06-05 at 13:08, Boud Roukema wrote:
Thanks Michał,
On 3 Jun 2002, Michal Frackowiak wrote:
I am sending a short write-up of the theory behind the maximum likelihood method and its adaptation to comparing curves with given errors. I believe this will be helpful - it seems to me the best way that does the trick:
- it takes particular errors into account
- lets you estimate errors of the fit
- as well as contours of confidence when making a grid.
It's a nice explanation.
This is well tested in my own software, so I hope it should work in this case as well. In case of questions - I would be more than glad to help.
Well, although it's clear the method can give a result, for it to give a correct result the different r values would need to be independent of one another.
So I see three problems, 2 easily solvable, 1 more fundamental:
- solvable:
(1) If we combine all three, L_{12} L_{23} L_{13}, then one of the three is dependent on the other two. So it seems to me that we have to remove one of the three, even though it's clear that this is an arbitrary choice.
Not exactly. If you remove one of them, imagine the situation: you have 2 curves (A and B) almost identical and 1 (C) very different. Now if you calculate L with A-B and A-C, you get nothing from the first pair - they are identical - but much from A-C. BUT if you choose to compare A-C (much difference) and B-C (also much difference), you get useless info. That is why you have to compare each with each.
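A toy version of that situation (all numbers made up for illustration):

    import numpy as np

    r = np.linspace(10.0, 200.0, 20)
    A = np.exp(-r / 50.0)
    B = 1.01 * A                   # nearly identical to A
    C = 0.5 * np.exp(-r / 30.0)    # very different
    sig = 0.05 * np.ones_like(r)

    def logL(x, y, s):
        # Gaussian log-likelihood that two measured curves agree;
        # combined variance of the two measurements is 2 s^2
        return -0.5 * np.sum((x - y)**2 / (2.0 * s**2))

    for name, (x, y) in [("A-B", (A, B)), ("A-C", (A, C)), ("B-C", (B, C))]:
        print(name, logL(x, y, sig))

A-B comes out near zero (no penalty), while A-C and B-C are both strongly negative; only with all three pairs can you see that exactly one curve is the odd one out.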
(2) There is some smoothing in the curves output by DEplotcorrnall. This can be removed (just set ismoo=0 on line 283 of DEplotcorrnall in DE-V0.04, I think this should be OK), but then my worry is that the result will be extremely noisy. An alternative solution would be to to only test one out of every (ismoo+1) values of r .
In eqs. (4) and (5) we should then replace the sum with an integral - that would of course be more natural. I have omitted it.
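Schematically (my notation here, assuming eq. (4) is the usual Gaussian sum over bins of width \Delta r):

    \ln L = -\frac{1}{2} \sum_i \frac{[\xi_1(r_i) - \xi_2(r_i)]^2}{\sigma_1^2(r_i) + \sigma_2^2(r_i)}
    \;\to\;
    -\frac{1}{2\,\Delta r} \int \frac{[\xi_1(r) - \xi_2(r)]^2}{\sigma_1^2(r) + \sigma_2^2(r)} \, dr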
- fundamental:
(3) Different bins in a correlation function depend on one another. A single quasar is a member of many pairs, and different pairs fall into different bins. So the different r values in a single function xi(r) depend on one another.
For this reason, I find it hard to believe that L (rescaled) would be a true probability density function.
I do not think it will be a problem - since you rescale it properly. I will think about it, but the fact that the values are somehow correlated should not affect the method - you can estimate errors for any r, as I understand it, and the method is just to compare curves!
It's certainly a good idea, so I'll put it as one of the DEplot_cf tests, but I don't think it'll give true error bars.
I think it will. For the error bars on the best fit you can calculate the second-derivative matrix of l({parameters}), then invert it - and you get the covariance matrix! From that point, the elements \sqrt{c_{ii}} give you the 1-sigma errors for the i-th parameter. That's a standard method for estimating errors from a minimisation. It works for me, and agrees with the plotted contours.
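As a sketch in code (a finite-difference Hessian; neg_logL stands for whatever -ln L you minimise, and the names are only illustrative):

    import numpy as np

    def covariance_from_hessian(neg_logL, p_best, eps=1.0e-5):
        # Covariance matrix as the inverse of the Hessian of -ln L at the
        # best-fit parameter vector p_best, using central finite
        # differences for the second derivatives.
        n = len(p_best)
        H = np.empty((n, n))
        for i in range(n):
            for j in range(n):
                ei = np.zeros(n); ei[i] = eps
                ej = np.zeros(n); ej[j] = eps
                H[i, j] = (neg_logL(p_best + ei + ej)
                           - neg_logL(p_best + ei - ej)
                           - neg_logL(p_best - ei + ej)
                           + neg_logL(p_best - ei - ej)) / (4.0 * eps**2)
        return np.linalg.inv(H)

    # 1-sigma error on parameter i:  np.sqrt(cov[i, i])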
regards Michal