chara:pavo_analysis_manual, revisions: 2018/11/20 22:11 (gail_stargazer) and 2018/11/29 15:29 (jones, [l1_l2_gui.pro]).

==== Downloading the Pipeline ====

The PAVO data reduction pipeline is available through the [[https://

To download the PAVO software:

==== l1_l2.pro ====

This program takes as input the out file from processv2.pro and the telescopes in use, and calibrates the visibility data. Some key options are:

  * diamsfile An optional text file with two columns: the first is the star name (exactly as in processv2.pro) and the second is the angular diameter of the star. This feature hasn't really been tested; it needs diameter errors and correct error propagation (anyone?).
  * exp If you turn this feature on with /exp, the photometry is used to correct the visibilities for photometric fluctuations. Definitely at least try it for 3-telescope data. Sometimes, 2-telescope data seems to be more stable than the photometric measurements themselves, hence this is an option rather than the default.
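For illustration, a hypothetical call using these options (the argument order, file names and star names below are assumptions, not the documented calling sequence; check the header of l1_l2.pro for the real one):

<code idl>
; hypothetical example only -- argument order and file names are assumptions
; diams.txt might contain two columns, the star name (exactly as in
; processv2.pro) and its angular diameter, e.g.:
;   HD_11977   0.31
;   HD_12345   0.45
l1_l2, 'pavo_out.txt', ['S1', 'W2'], diamsfile='diams.txt', /exp
</code>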
- | |||
===== Nice features to add =====

processv2: Knowledge of the telescopes from the start, i.e. finding whether 2 or 3 telescopes were used based on the shutter sequences. \\ l1_l2_gui: Add more complex models such as binaries etc. to the fitting routine.

==== Setting up IDL ====

In addition to the PAVO code, you will need to download the Goddard astrolib library from [[http://

  - Put the astrolib library in ~/
  - Assuming that you've done the CVS checkout above into ~/, use the following for your idl_startup.pro:

defsysv, '

<code>
!path = expand_path('
!path = expand_path('
</code>

\\ Then everything should work. Alternatively,

==== Data Analysis Strategy ====

PAVO data is not labelled by distinct dark, on-fringe and off-fringe files. Instead, a program has to go through the file headers and make a mostly human-readable headstrip file. This is not a substitute for good log taking.

==== Fourier-Based Analysis ====

In this analysis, each frame is first turned into a data cube with wavelength and 2D pupil position. Each wavelength is then analysed using a 2D Fourier transform: the power is chopped out, the data inverse-transformed, and the analysis done on the demodulated fringes in the pupil plane.

==== Demodulation (model-based) Analysis ====

This is the method used to get group delays in get_gd.pro. There is some potential for this method to deliver improved results for the very faintest 3-telescope data. Consider this experimental only. If this is sped up in a later version of the code, the output quantities from processv2.pro will not have to be recalculated. The latest ideas (1 April 2010) are to use Graphics Processing Units (GPUs) in a collaboration with the School of IT at Sydney, with a timescale of late 2010, or sparse matrices. The prototype will be sparse matrices in IDL. If GPUs were used (1000 times faster than a PC), we estimate 2 weeks to analyse all PAVO data taken so far!

===== Programs for Analysis =====

==== headstrip.pro ====

Use this program like headstrip, /

==== processor.pro ====

This, or something you name yourself, is an example of how to script multiple data analysis runs, e.g. over a weekend. It is an excellent way to keep a diary of what was run before. NB //You either have to run processor.pro,

==== processv2.pro ====

This is the main program for analysis. Originally, it was only for V2 analysis. The following options are all inputs to processv2.pro.

  * nohann Usually, a window is applied in the Fourier domain. Setting this keyword means that a window isn't used.
  * lambda_smooth This is the most important option. It specifies the number n of wavelength channels over which the signal will be coherently smoothed to increase S/N by sqrt(n). It can be useful for data on faint stars but should be used with caution. It is probably best not to set lambda_smooth for your first analysis, and to check its influence if you're not satisfied with the results.
  * individual Essential for rejecting bad data.
  * plot Always use this option when you are running a new data analysis. The plots will look like the following for a strong fringe on baseline 1, which uses beams 1 and 2:

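Following the advice above, a cautious first run and a subsequent smoothing test might look like this (only the keywords are documented here; the rest of the calling sequence is an assumption):

<code idl>
; first analysis: no smoothing, keep .pav files, inspect the plots
processv2, /individual, /plot
; only if a faint star's result is unsatisfying, test the influence of smoothing:
processv2, /individual, /plot, lambda_smooth=3
</code>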
- | |||
The outputs from processv2.pro are:

  * The out_file, which is the text file output, containing V2 values, estimated errors, closure phases etc.
  * The .pav files from individual analysis. These are IDL variable files that contain finely sampled V2 and closure phase.
  * The log file, which contains the settings used for the out file.
  * If /plot is set, a screenshot of the output of the wavelength calibration will be created. A faulty wavelength calibration has strong potential to corrupt the analysis, and it should therefore be checked before proceeding. Offsets and rotation are listed in the log file: offsets of +/- 2 pixels are ok; anything higher should be treated as suspicious. An example plot for an acceptable wavelength calibration with an offset of 1.2 can be found [[:chara:file_view_wavecal.gif_232656756_wavecal.gif|here]] (solid line = flux vs lambda after correcting for offset, dashed line = calibrated inflection point of the PAVO filter).

==== l0_l1_gui.pro ====

This program takes the .pav files, performs an automated outlier rejection, and enables the user to manually reject bad sections of data. It can only be run if /individual is set when running processv2.pro (i.e. .pav files have been created), which should really be the default option. \\ \\ Walkthrough:
4) Outlier rejection is based on 3 criteria: S/N, seconds after lock on fringes is lost (NSEC), and deviation from the mean in sigma (SIGMA LIMIT). The default values should be fine in most cases; the S/N cuts are probably the most sensible to adjust if the data is particularly bad or good. \\
5) The bottom row shows a range of diagnostics that can be used to determine the quality of the data, such as group delays, cart positions, V2C/V2 (a measure of t0, good if high) and histograms. An example histogram display for 3-baseline data can be found [[:chara:file_view_l0l1_3bl.tiff_232655724_l0l1_3bl.tiff|here]]. The top right window will display the fraction of datapoints rejected with the current settings. \\
6) Once you're satisfied with the rejection settings for the scan, press OUTPUT FILE, which will show V2 vs lambda using all frames that survived the outlier rejection. This step will create an entry in the output file //
==== l1_l2_gui.pro ====

A GUI extension of l1_l2.pro to calibrate data, including multi-bracket calibration and proper uncertainty calculations using Monte-Carlo simulations. This program can only be run with the output of l0_l1_gui.pro (see above). \\ \\ Walkthrough:
4) Once you have identified the brackets that should be used for calibration,
5) Load your config file using the LOAD CONFIG FILE button and press CALIBRATE to perform the multi-bracket calibration. An LD/UD fit will be performed automatically. Press OUTPUT FILES to output the data of this graph. \\
6) Pressing RUN MC will start the Monte-Carlo simulations to calculate uncertainties of the UD fit to the data. The simulations include uncertainties in the wavelength scale, calibrator diameters, measurement errors, the limb-darkening coefficient and correlations between wavelength channels (see [[:chara:file_view_l1_l2.tiff_232656708_l1_l2.tiff|here]] for an example output screenshot). This feature is so far only implemented for a simple UD fit. \\
\\
Note: /exp is the default in this program, as I've found it to improve the calibration in almost all cases.