========================== Zbb note questions for Per 23-09-05 ==========================

1. Abstract, line 6: "the bb bkg is estimated using tag-rate functions derived in data". Is this still true, given the normalisation?
2. What are the excesses (in terms of significance) for each dataset now?
3. I need excesses and significances for each dataset - in particular, I don't have excess numbers for v13.
4. I replaced the previous lumi value for v13, 41.2 pb-1, with 44.0 pb-1. Is this right?
5. Do we really want to include Table 1 and all the MC samples we sort of used but didn't?
6. Do we really want to include Andy's section? Yes - probably as a cross-check, but we need to make sure the two analyses agree reasonably well.
7. Section 3, page 4, 2nd paragraph: when will the STT be used? Is it being used in earnest now?
8. Do we want to specify in the note the top (say) 5 triggers for each dataset, or just the ORed combination that we ultimately used? Or both?

Notes for me

* Leave the v13 trigger description as is.
* Make the TRF vs dphi plot with only QCD and bb 20-40 for now.
* Look at the v11 lumis.
* We observe a dphi dependence of the TRF. If there were no dependence, the TRF plot would be flat (no correction needed). But there is a dependence, so the TRF plot has a slope (a correction is needed). If we apply the flat correction (extrapolated from outside the signal zone to inside it), we find a larger systematic error than if we apply the correction in the invariant-mass plane. Essentially, by fitting in the invariant-mass plane, the correction we apply is more accurate. We float the normalisation of the background invariant mass to match the invariant mass of the data above 140 GeV. When I write this section, alert Per so he can have a look.
* You are correct, the njet correction is not needed (we only use 2-jet events for the TRF). In the latest analysis I replaced the dphi correction with a normalisation of the number of events above 140 GeV in the expected background to the number observed in the double-tagged data. This leads to a somewhat smaller systematic uncertainty than the dphi-fit extrapolation. I still have one more thing to try to get the error down: fit the invariant mass above 140 GeV to obtain the normalisation. It will probably not improve things by much, but it's worth a shot. I will also try to fit, or use a parametrised, TRF for v11 and v13 because of the low statistics.
* I have spent some time fitting the invariant mass of the expected background to the double-tagged data above 140 GeV, hoping to improve the normalisation. I think I can bring the uncertainty down to 2%. It also looks like the overall normalisation for v12 changes from 1.16 to 1.14, which leads to a larger excess. In fact, if I now count the events in the bins between 60 and 90 GeV, I have a 3 sigma excess, taking both the statistical and the 2% systematic uncertainty into account. So it looks promising.
* I have also tried to fit the invariant-mass templates to the observed invariant mass from the double-tagged events, and it works out rather nicely: /rooms/bar/imperial/jonsson/v13_zbb/pass2/invmass_template_fit.gif. There seems to be a discrepancy of about 25% between data and the expected signal from MC, but that is within the overall uncertainties.
* v13 data cannot be used - essentially too little statistics. The problem is that the TRF vs pT (of the light QCD jets) does not have enough statistics. For every event we construct the TRF vs ET (pT) before we construct the overall TRF, and this initial TRF is not defined by enough events. We will probably end up combining the v12 and v11 statistics. The v13 triggers have been shown to be more efficient and useful, and will be valuable in p17 data when we have more statistics.
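The normalise-above-140-GeV step and the 60-90 GeV excess count described in the notes above can be sketched as follows. This is only an illustrative toy: the bin edges, data and background counts, the 2% systematic, and all function names are invented placeholders, not the real v12 numbers or the actual analysis code.

```python
import math

# Hypothetical binned invariant-mass spectra (lower bin edges in GeV and
# per-bin counts). These numbers are made up for illustration only.
bin_edges = [0, 20, 40, 60, 90, 140, 200, 300]
data      = [50, 120, 400, 520, 300, 90, 20]   # double-tagged data
bkg       = [48, 118, 390, 450, 290, 100, 22]  # expected bkg, pre-normalisation

def normalise_above(threshold, edges, observed, expected):
    """Scale factor that matches the expected-bkg yield above `threshold`
    to the double-tagged data, as described in the notes."""
    n_obs = sum(d for lo, d in zip(edges, observed) if lo >= threshold)
    n_exp = sum(b for lo, b in zip(edges, expected) if lo >= threshold)
    return n_obs / n_exp

def window_sum(lo_edge, hi_edge, edges, vals):
    """Sum of counts in bins whose lower edge falls in [lo_edge, hi_edge)."""
    return sum(v for lo, v in zip(edges, vals) if lo_edge <= lo < hi_edge)

scale = normalise_above(140, bin_edges, data, bkg)
bkg_norm = [scale * b for b in bkg]

# Excess in the 60-90 GeV window, with a simple significance estimate:
# excess / sqrt(stat^2 + syst^2), taking a 2% normalisation systematic
# on the background (the figure quoted in the notes).
n_data_win = window_sum(60, 90, bin_edges, data)
n_bkg_win  = window_sum(60, 90, bin_edges, bkg_norm)
excess = n_data_win - n_bkg_win
stat = math.sqrt(n_bkg_win)        # Poisson uncertainty on the expected bkg
syst = 0.02 * n_bkg_win            # 2% normalisation systematic
significance = excess / math.sqrt(stat**2 + syst**2)
print(f"scale={scale:.3f}  excess={excess:.1f}  significance={significance:.1f}")
```

With real histograms the only change is feeding in the measured bin contents; the structure (normalise in the sideband above 140 GeV, then count in the 60-90 GeV signal window) follows the procedure described above.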