Thursday, May 11, 2006

test post

Testing



Wednesday, March 22, 2006

Re: st: RE: RE: RE: list subjects with a similar value

> Personally if you want to be able to revert to older versions of files
> I'd recommend simply creating a copy before doing major revisions and
> simply append a date in numeric format at some point to the filename
> (before '.' would be most appropriate).

Thank you, Neil. Are there resources on systematic and efficient "best practices" in do-file coding that beginning Stata users can emulate and integrate into their own work? Michael


st: simultaneous equation with qualitative variables

How can I use Stata to estimate simultaneous equations with qualitative variables, as follows:

y1 = a1 + b1*y2 + c1*x1 + e1
y2 = a2 + b2*y1 + c2*x2 + e2

y1* = 1 if y1 > 0;  y1* = 0 if y1 <= 0
y2* = 1 if y2 > 0;  y2* = 0 if y2 <= 0


RE: st: RE: RE: RE: list subjects with a similar value

MS-Word I understand to be a word processor. Recall that many members of Statalist do not use and certainly are not expert in Windows or Microsoft products generally.

In Vim, the way that seems to be most natural is a Unix way. You can have two files open in two windows and set it so that differences are highlighted. So, one could be a previous version and the other a working version.

I gather that Word behaves differently.

In general, good text editors will have something loosely similar. None that I know of regards it as a virtue to emulate Word.

Nick n.j.cox@durham.ac.uk

Michael McCulloch > Sorry; I was referring to changes in coding that one writes > in a do-file. > >Changes to what?

> > > Jennifer response brings to mind a question that recently > > > occurred to me: > > > Is there a Stata-compatible text editor that, like MS-WORD, > > > can highlight changes? > > > > > >Whoops, of course you're right. When I play with things to > > > work out code > > > >I usually don't keep the changes, so I mangle working datasets > > > >willy-nilly and didn't think to change the conditioned keep to a > > > >conditioned list. > > > >The use of bysorting and _N is much neater and more flexible.


Re: st: RE: RE: RE: list subjects with a similar value

On 3/23/06, Michael McCulloch <mm@pinest.org> wrote:
> Jennifer's response brings to mind a question that recently occurred to me:
> Is there a Stata-compatible text editor that, like MS-Word, can highlight
> changes?

Although I don't use it myself, there are version control features in Emacs. These work with version control systems such as CVS or SVN, and I'm not sure what your mileage would be writing do/ado-files under such schemes, but I suspect it is possible (don't quote me on that, though :).

This isn't exactly the same as MS Word's document tracking (which I personally find hideously hard to follow, particularly when there are multiple authors making revisions; I've seen some docs that end up looking like the old TV test screens :-), but it does allow you to track the changes that you are making. One of the main problems (as I see it) is that to write do/ado-files you need a _text_ editor, and Word is not a text editor but a word processor, so all the colour changes that you see are essentially mark-ups of the original text, and such mark-ups would render the Stata code uninterpretable.

See http://www.gnu.org/software/emacs/manual/html_node/Version-Control.html for more on Emacs' VC system.

Personally if you want to be able to revert to older versions of files I'd recommend simply creating a copy before doing major revisions and simply append a date in numeric format at some point to the filename (before '.' would be most appropriate).
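A minimal sketch of that convention in Stata itself (the file name analysis.do is purely illustrative; only -copy- and c(current_date) are relied on):

* before a major revision, keep a dated copy of the do-file (sketch only)
local stamp = subinstr(c(current_date), " ", "", .)   // e.g. 23Mar2006
copy "analysis.do" "analysis_`stamp'.do", replace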

HTH's

Neil -- "The best safety device in climbing is always situated between your ears" - Ross Weiter, Perth Rock Climbing Guide (2002)

Email - nshephard@gmail.com / neilshep@cyllene.uwa.edu.au Website - http://slack.ser.man.ac.uk/ Blog - http://slack---line.blogspot.com/ Flickr - http://www.flickr.com/photos/slackline/


RE: st: RE: RE: RE: list subjects with a similar value

Sorry; I was referring to changes in coding that one writes in a do-file.

At 04:47 PM 3/22/2006, you wrote: >Changes to what? > >Nick >n.j.cox@durham.ac.uk > >Michael McCulloch > > > Jennifer response brings to mind a question that recently > > occurred to me: > > Is there a Stata-compatible text editor that, like MS-WORD, > > can highlight changes? > > > >Whoops, of course you're right. When I play with things to > > work out code > > >I usually don't keep the changes, so I mangle working datasets > > >willy-nilly and didn't think to change the conditioned keep to a > > >conditioned list. > > >The use of bysorting and _N is much neater and more flexible. > >* >* For searches and help try: >* http://www.stata.com/support/faqs/res/findit.html >* http://www.stata.com/support/statalist/faq >* http://www.ats.ucla.edu/stat/stata/

Best wishes, Michael

____________________________________

Michael McCulloch Pine Street Clinic 124 Pine Street, San Anselmo, CA 94960-2674 tel 415.407.1357 fax 415.485.1065 email: mm@pinest.org web: www.pinest.org www.pinestreetfoundation.org


RE: st: RE: RE: RE: list subjects with a similar value

Changes to what?

Nick n.j.cox@durham.ac.uk

Michael McCulloch > Jennifer response brings to mind a question that recently > occurred to me: > Is there a Stata-compatible text editor that, like MS-WORD, > can highlight changes? > >Whoops, of course you're right. When I play with things to > work out code > >I usually don't keep the changes, so I mangle working datasets > >willy-nilly and didn't think to change the conditioned keep to a > >conditioned list. > >The use of bysorting and _N is much neater and more flexible.


st: RE: RE: RE: Macro display format

It's a good question.

I guess: -scatter- calls -graph- calls ... something that clears your r-class results.

-graph- needs all sorts of little calculations to work out what to show. It tends to do this on the fly, but either way, you can lose your r-class results.

Whatever it is, it is quite deep down.
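One common defence, sketched here rather than quoted from the thread: copy the r-class result into a local macro straight after -summarize-, before any graph command gets a chance to clear it.

sysuse auto, clear
quietly summarize length
local mlen : display %9.1f r(mean)     // grab r(mean) immediately
twoway scatter length mpg, text(200 35 "The mean of length is: `mlen'")
display "`mlen'"                       // the local survives the graph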

Nick n.j.cox@durham.ac.uk

Alex Ogan

> Sorry if this is a silly question. > > I did the following sequence of commands. I closed the > scatter as soon > as it opened. No other commands. > > Why does r(mean) go away after you use it in the scatter with the > formatting extended function? > > . sysuse auto, clear > (1978 Automobile Data) > > . quietly summ length > > . di `r(mean)' > 187.93243 > > . di `: di %9.1f `r(mean)'' > 187.9 > > . di `: di %9.1f `r(mean)'' > 187.9 > > . twoway scatter length mpg, text(200 35 "The mean of length is:`: di > %2.1f `r(mean)''") > > . di `: di %9.1f `r(mean)'' > > > . di `r(mean)' > > > . >


st: "sureg" with long-format data

Hello all,

I have searched the web and the available Stata FAQs and help files for an answer to a -sureg- question and am unable to find one on my own. Much obliged if anyone can help me through this particular puzzle.

I have a dataset oriented lengthwise where observations are grouped by country and year. I would like to run sureg using this vertical orientation. For example, in the sureg syntax, I would like to write "sureg (yvar xvar)" as equation 1 representing the first 35 observations, followed by "(yvar xvar)" as equation 2 representing the next 35 observations, and so on. My difficulty is that without re-orienting the data width-wise (which creates a cumbersome heap of newly-named X and Y variables and stops me from being able to tweak the regressions around the edges) I have no way to distinguish for Stata that instead of each equation having different Y and X variable names, each equation has the same variable names but should contain a different set of 35 observations. My abbreviated dataset looks like:

Country     Year   Yvar   Xvar
Australia   2004   .5     14
Australia   2003   .6     17
....
Australia   1970   .2     10

Austria     2004   .9     35
Austria     2003   .8     14
....
Austria     1970   .15    14

Belgium     2004   .13    7
.... (and so on for approximately 100 countries)

Ideally, the sureg command would allow something like "by country: sureg (Yvar Xvar)" but instead, I have to reorient the data widthwise and write "sureg (YvarAustralia XvarAustralia) (YvarAustria XvarAustria)..." for about one-hundred different equations (and change all 100 when I make adjustments). Requiring this horizontal orientation to make sureg work seems remarkably inefficient in the command line, but I can't seem to find another way around it. Many thanks for any and all ideas.
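A hedged sketch of how the wide-format workaround described above might at least be automated, so the hundred equations need not be typed by hand (variable names follow the example data above; untested):

encode Country, gen(cid)               // numeric country id for -reshape-
drop Country
reshape wide Yvar Xvar, i(Year) j(cid)
local eqs ""
foreach v of varlist Yvar* {
    local j = substr("`v'", 5, .)      // suffix identifying the country
    local eqs `"`eqs' (Yvar`j' Xvar`j')"'
}
sureg `eqs'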

Cheers, Eric

Eric K. Bielke Regulatory Economics Advisor www.telecom.co.nz Level 2, Telecom House, 68-86 Jervois Quay, P O Box 570, Wellington, New Zealand



st: RE: RE: Macro display format

Sorry if this is a silly question.

I did the following sequence of commands. I closed the scatter as soon as it opened. No other commands.

Why does r(mean) go away after you use it in the scatter with the formatting extended function?

. sysuse auto, clear
(1978 Automobile Data)

. quietly summ length

. di `r(mean)'
187.93243

. di `: di %9.1f `r(mean)''
187.9

. di `: di %9.1f `r(mean)''
187.9

. twoway scatter length mpg, text(200 35 "The mean of length is:`: di %2.1f `r(mean)''")

. di `: di %9.1f `r(mean)''

. di `r(mean)'

.

-----Original Message-----
From: owner-statalist@hsphsun2.harvard.edu [mailto:owner-statalist@hsphsun2.harvard.edu] On Behalf Of Nick Cox
Sent: Wednesday, March 22, 2006 5:51 PM
To: statalist@hsphsun2.harvard.edu
Subject: st: RE: Macro display format

No. You should try

` : di %9.1f `r(mean)''

See help extended_fcn.

Tip: I would go `: di %2.1f `r(mean)'' even if you are sure that format is too restrictive. You're likely to be wrong, as Stata will stretch the space to avoid damage. However, with %9.1f you are likely to get the ugly spaces that are a consequence of what you asked for.

Nick n.j.cox@durham.ac.uk

Thomas Speidel > I am trying to include the content of a macro within a graph, but I'm > having problems with the display format. > > For example: > > sysuse auto, clear > qui: summ length > twoway scatter length mpg, text(200 35 "The mean of length is: > `r(mean)'") > > How do I change the format of the macro to display something > like %9.2f? > > I tried: > ... , text(200 35 "The mean of length is: `%9.1f `r(mean)''") > > Am I missing some triple compound quote? :-)




st: RE: Macro display format

No. You should try

` : di %9.1f `r(mean)''

See help extended_fcn.

Tip: I would go `: di %2.1f `r(mean)'' even if you are sure that format is too restrictive. You're likely to be wrong, as Stata will stretch the space to avoid damage. However, with %9.1f you are likely to get the ugly spaces that are a consequence of what you asked for.
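A quick illustration of the difference (an aside, not part of the original exchange):

di %2.1f 187.93243    // field stretches to fit: 187.9
di %9.1f 187.93243    // fixed width of 9: leading blanks before 187.9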

Nick n.j.cox@durham.ac.uk

Thomas Speidel > I am trying to include the content of a macro within a graph, but I'm > having problems with the display format. > > For example: > > sysuse auto, clear > qui: summ length > twoway scatter length mpg, text(200 35 "The mean of length is: > `r(mean)'") > > How do I change the format of the macro to display something > like %9.2f? > > I tried: > ... , text(200 35 "The mean of length is: `%9.1f `r(mean)''") > > Am I missing some triple compound quote? :-)


st: RE: RE: RE: list subjects with a similar value

Whoops, of course you're right. When I play with things to work out code I usually don't keep the changes, so I mangle working datasets willy-nilly and didn't think to change the conditioned keep to a conditioned list. The use of bysorting and _N is much neater and more flexible.

N.J.Cox wrote: >Whoa! The question was just about _listing_. >You just changed Raoul's dataset by throwing >much of it away.


st: Macro display format

I am trying to include the content of a macro within a graph, but I'm having problems with the display format.

For example:

sysuse auto, clear
qui: summ length
twoway scatter length mpg, text(200 35 "The mean of length is: `r(mean)'")

How do I change the format of the macro to display something like %9.2f?

I tried: ... , text(200 35 "The mean of length is: `%9.1f `r(mean)''")

Am I missing some triple compound quote? :-)

Thanks, Thomas

-- Thomas Speidel Statistical Associate Clinical Trials Unit Tom Baker Cancer Centre 1331 - 29th Street N.W. Calgary, AB, T2N 4N4

Tel. (403) 521-3370 Email: thomassp@cancerboard.ab.ca



RE: st: list subjects with a similar value

You are avoiding the command -duplicates- and doing it from first principles. That is a very good idea. -duplicates- is just a wrapper for stuff like this.

But the three steps here can be cut to two.

bysort date_of_birth : gen dob_duplicate = _N
list id date_of_birth if dob_duplicate >= 2

Nick n.j.cox@durham.ac.uk

clinton.thompson@summitllc.us > there may be a more elegant way to do this, albeit this is > but one attempt: > > * obtain the number of duplicates w/in date of birth > bysort date_of_birth: gen dob_duplicate = _N > * tag each combination of DOB & the duplicates therein > egen dob_tag = tag(date_of_birth dob_duplicate) > * list the ID & DOB associated w/ each repeated DOB... > list id date_of_birth dob_tag if dob_tag > > I have a large database and would like to list the idnumber of all > > subjects with the same date of birth. How do I do this? I have tried > > .duplicate, but can figure out how to do it. Thanks.


st: RE: RE: list subjects with a similar value

Whoa! The question was just about _listing_. You just changed Raoul's dataset by throwing much of it away.

duplicates tag dateofbirth, gen(tag)
sort dateofbirth
list id dateofbirth if tag

Nick n.j.cox@durham.ac.uk

Marino, Jennifer > Maybe try something along these lines: > > duplicates tag dateofbirth, gen(tag) > drop if tag==0 > sort dateofbirth > by dateofbirth: list id

Raoul C Reulen > I have a large database and would like to list the idnumber of all > subjects with the same date of birth. How do I do this? I have tried > .duplicate, but can figure out how to do it. Thanks.


Re: st: list subjects with a similar value

there may be a more elegant way to do this, albeit this is but one attempt:

* obtain the number of duplicates w/in date of birth
bysort date_of_birth: gen dob_duplicate = _N
* tag each combination of DOB & the duplicates therein
egen dob_tag = tag(date_of_birth dob_duplicate)
* list the ID & DOB associated w/ each repeated DOB...
list id date_of_birth dob_tag if dob_tag

Note that I didn't subject this to a rigorous test... but I think it works, nonetheless. --clint

> Hi, > > I have a large database and would like to list the idnumber of all > subjects with the same date of birth. How do I do this? I have tried > .duplicate, but can figure out how to do it. Thanks. > > Raoul > > * > * For searches and help try: > * http://www.stata.com/support/faqs/res/findit.html > * http://www.stata.com/support/statalist/faq > * http://www.ats.ucla.edu/stat/stata/ >


st: RE: list subjects with a similar value

Maybe try something along these lines:

duplicates tag dateofbirth, gen(tag)
drop if tag==0
sort dateofbirth
by dateofbirth: list id

Jen Marino

-----Original Message-----
From: owner-statalist@hsphsun2.harvard.edu [mailto:owner-statalist@hsphsun2.harvard.edu] On Behalf Of Raoul C Reulen
Sent: Wednesday, March 22, 2006 2:14 PM
To: statalist@hsphsun2.harvard.edu
Subject: st: list subjects with a similar value

Hi, I have a large database and would like to list the idnumber of all subjects with the same date of birth. How do I do this? I have tried .duplicate, but can't figure out how to do it. Thanks. Raoul


st: list subjects with a similar value

Hi, I have a large database and would like to list the idnumber of all subjects with the same date of birth. How do I do this? I have tried .duplicate, but can't figure out how to do it. Thanks. Raoul


Re: st: monte carlo study

Rudy,

here is an example based on the help file for -simulate-

-------------------------
capture program drop mcarlols
program define mcarlols, rclass
    syntax [, obs(integer 1) ]
    drop _all
    set obs `obs'
    drawnorm eps x
    g y = 1 + x + eps
    reg y x
    return scalar beta = _b[x]
end

simulate beta=r(beta), reps(1000): mcarlols, obs(100)

su beta, de
kdensity beta, norm
--------------------------------

le 22/03/2006 20:10, Rudy Fichtenbaum a ecrit : > Stata Users: > > I am still learning my way around Stata after many years of using SAS. > In SAS it is fairly easy to write a program to do a simple Monte Carlo > study to illustrate the properties of least squares estimators. > > Is there anyone that has a simple example of a Monte Carlo Study for OLS? > > Thanks, > > Rudy > * > * For searches and help try: > * http://www.stata.com/support/faqs/res/findit.html > * http://www.stata.com/support/statalist/faq > * http://www.ats.ucla.edu/stat/stata/ >


Re: st: monte carlo study

In the unlikely event that you have not, do check out -findit monte carlo-. The example in STB Reprints vol. 4, p. 207 may be helpful:

TITLE STB-20 ssi6. Simplified Monte Carlo simulations.

Rudy Fichtenbaum wrote: > Stata Users: > > I am still learning my way around Stata after many years of using SAS. > In SAS it is fairly easy to write a program to do a simple Monte Carlo > study to illustrate the properties of least squares estimators. > > Is there anyone that has a simple example of a Monte Carlo Study for OLS? > > Thanks, > > Rudy > * > * For searches and help try: > * http://www.stata.com/support/faqs/res/findit.html > * http://www.stata.com/support/statalist/faq > * http://www.ats.ucla.edu/stat/stata/ > * * For searches and help try: * http://www.stata.com/support/faqs/res/findit.html * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/



st: Chou-Talalay method

Dear all,

Does anyone have or know of a Stata program to compute the various Chou-Talalay statistics of dose-effect relationship?

Thank you, Ricardo

Ricardo Ovaldia, MS Statistician Oklahoma City, OK


st: monte carlo study

Stata Users:

I am still learning my way around Stata after many years of using SAS. In SAS it is fairly easy to write a program to do a simple Monte Carlo study to illustrate the properties of least squares estimators.

Is there anyone that has a simple example of a Monte Carlo Study for OLS?

Thanks,

Rudy

RE: st: functions for computing prob

Thanks a lot, Gary!

Lei

-----Original Message-----
From: owner-statalist@hsphsun2.harvard.edu [mailto:owner-statalist@hsphsun2.harvard.edu] On Behalf Of Gary Longton
Sent: Wednesday, March 22, 2006 12:47 PM
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: functions for computing prob

It appears that I was mistaken about -tprob()-, that it does exist and appear to work, though undocumented and apparently obsolete. It looks to be the 2-tailed version of -ttail()-

. di tprob(30,1.8) .08192507

. di ttail(30,1.8) .04096253

- GL

Gary Longton wrote: > Lei Xuan wrote: > >> I am computing probabilities for t-test and z-test. >> I want to know if the functions -tprob- and -normprob- are out-of-date >> since no help files explain these functions. Are -ttail(n,t)- and >> -normal(z) >> >> right functions to compute probs? > > > see -help density functions- > or -help functions- > > Yes, the cumulative normal density function, normprob(), still works but > is out of date, no longer documented, and has been replaced by > normal(). Am not sure whether tprob() ever existed? - doesn't seem to > work and is not documented in any case. ttail(n,t) is documented under > help for functions. > > - Gary * * For searches and help try: * http://www.stata.com/support/faqs/res/findit.html * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/


Re: st: functions for computing prob

It appears that I was mistaken about -tprob()-: it does exist and appears to work, though undocumented and apparently obsolete. It looks to be the 2-tailed version of -ttail()-:

. di tprob(30,1.8)
.08192507

. di ttail(30,1.8)
.04096253
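The relationship is easy to check directly (an aside, not part of the original post):

di tprob(30, 1.8)      // two-tailed P
di 2*ttail(30, 1.8)    // should reproduce it, up to rounding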

- GL

Gary Longton wrote: > Lei Xuan wrote: > >> I am computing probabilities for t-test and z-test. >> I want to know if the functions -tprob- and -normprob- are out-of-date >> since no help files explain these functions. Are -ttail(n,t)- and >> -normal(z) >> >> right functions to compute probs? > > > see -help density functions- > or -help functions- > > Yes, the cumulative normal density function, normprob(), still works but > is out of date, no longer documented, and has been replaced by > normal(). Am not sure whether tprob() ever existed? - doesn't seem to > work and is not documented in any case. ttail(n,t) is documented under > help for functions. > > - Gary * * For searches and help try: * http://www.stata.com/support/faqs/res/findit.html * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/



Re: st: functions for computing prob

Lei Xuan wrote:

> I am computing probabilities for t-test and z-test. > I want to know if the functions -tprob- and -normprob- are out-of-date > since no help files explain these functions. Are -ttail(n,t)- and -normal(z) > > right functions to compute probs?

see -help density functions- or -help functions-

Yes, the cumulative normal density function, normprob(), still works but is out of date, no longer documented, and has been replaced by normal(). Am not sure whether tprob() ever existed? - doesn't seem to work and is not documented in any case. ttail(n,t) is documented under help for functions.
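A small check of the old and new names side by side (an editorial aside; both should return the same cumulative probability):

di normprob(1.96)   // old name: undocumented but still works
di normal(1.96)     // current, documented replacement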

- Gary


st: functions for computing prob

Hi,

I am computing probabilities for t-tests and z-tests. I want to know if the functions -tprob- and -normprob- are out of date, since no help files explain these functions. Are -ttail(n,t)- and -normal(z)- the right functions to compute probs?

Thanks,

Lei Xuan


Re: st: STATA syntax colouring in TextWrangler

Cool! Works w/ BBEdit, too. You just have to create a "Languages Module" folder.

See this as well: http://dataninja.wordpress.com/2006/03/03/send-to-stata-applescript-for-textwrangler/

-- Danielle H Ferry

On Mar 22, 2006, at 9:12 AM, Ronán Conroy wrote:

> On 22 Márta 2006, at 12:47, Taavi Lai wrote:
>
>> Google search "TextWrangler stata syntax" gave several hits and
>> one of those directed to a file Stata.plist which downloaded ok.
>> I'm not a Mac user and have no acquaintance with TextWrangler so you
>> have to experiment further
>> the link is http://dataninja.wordpress.com/2006/02/28/stata-language-module-for-textwrangler/
>>
>> Regards,
>> Taavi
>
> Splendid hunting. I've installed it and it works fine, except that
> it fails to recognise the single open quote correctly. You need to
> fix these lines
>
>   <key>Open Strings 2</key>
>   <string></string>
>
> to read
>
>   <key>Open Strings 2</key>
>   <string>`</string>
>
> I have also made a couple of personal tweaks: it now recognises all
> SSC packages. I've emailed this version to DataNinja so hopefully
> it will be posted on his/her site.
>
> Ronán Conroy
> rconroy@rcsi.ie


Re: st: GOOGLE ANSWERS FORWARDED

I could not agree more--in this case, it seems as though someone is charging for help that they are soliciting from a free source, and http://answers.google.com/answers/ says "Researchers are ready to answer your question for as little as $2.50 -- usually within 24 hours" which I suppose is why "It's pretty urgent."

This is at least as objectionable as students seeking help on graded problem sets.

On 3/22/06, n j cox <n.j.cox@durham.ac.uk> wrote: > What is this? > > I propose three simple principles: > > 1. If someone wants to join Statalist and post to it, they should > feel welcome, and they should read the FAQ to see how we operate. > Nothing new there. > > 2. If someone wants to post something to Statalist on behalf of someone > else, that's OK so long as they explicitly undertake to answer > subsidiary questions and forward the answers. > > 3. Otherwise I see zero point in answering questions like this, as > we have no assurances of an answer being seen, or "someone" or "Martin" > answering any questions it raises. The appeal to urgency is also > objectionable. > > Nick > n.j.cox@durham.ac.uk > > -------------------------------------------------------------------------------- > From: noah_kauffman@prusec.com > Subject: st: GOOGLE ANSWERS FORWARDED > Date: Wed, 22 Mar 2006 10:34:11 -0500 > > Someone @ google answers asks: > > > Ok, I'm trying to run a > fixed-effects panel in stata. > My regression: (where var1 is the > depend. variable) > > (PLEASE SEE DATA AT END) - Data > pasted from excel into stata via > editor > Stata commands were as follows: > > "tsset id year" > "gen var1 = ln(x3)" > "gen var2 = ln(var 1[_n-1])" ie. > require ln(x3 i,t-1) > "gen var3 = ln(x2)" > "gen var4 = ln(5 +x1)" > > xtreg var1 var2 var3 var4, i(id), fe > > Now for the coefficient on var2 I > get 0.177, and I'm looking for > something around the 0.5 region. > I've gone wrong somewhere - can > someone please help??? > It's pretty urgent as I need this by > Thursday morning at latest. If it > helps, I'm trying to estiamte the > model on page 1146 for OECD > countries, (but with my own data) in > Islam's paper: "Growth Empirics: > A Panel Data Approach" (1995) > I'm online all the time, so if > necessarty, I can clarify things > beforehand. > > <zap>


RE: st: Naming convention, Ideas?

Thank you Nick(s)! I hadn't thought of the colon approach or the major minor subcommand route. And one module certainly *is* easier to maintain, document, and support.

I am glad I posted a query I thought might seem trivial.

pj

Nick Winter > Maybe something that works off the notion of "generalized." > > One option would be to use the prefix approach, to create a > syntax like: > > . genmanip : merge ... > > . genmanip : append ... > > and so on. Then you have only one .ado file to maintain, easily > allowing options that apply to your command (distinct from the > append, merge, etc. options), etc. > > See -help _on_colon_parse- for a Stata command that helps > parsing that syntax. > > I'm not sure -genmanip- is a great name, but something like that? > > --Nick Winter > > > > At 10:10 AM 3/22/2006, you wrote: > >I am looking into writing a suite of wrapper data management > >commands around merge, mmerge, append, joinby, and cross that can > >either take a stata data file, gzip compressed data file or simply a > >comma or tab delimited text file as the -using- argument, e.g. > ><cmd_name> using *.dta | *.dta.gz | *.dgz | *.txt | *.cvs [, * ]. > > > >Two questions: > >1) Any ideas w/ regard to a consistent naming convention that could > >be used? as I'd like to get it right the first time. I am not very > >fond of using an integer as a suffix a la cf2, cf3 for various > >reasons (e.g not very informative, unclear if integers imply > >incremental functionality, can conflict with others' names). So far > >I thought of: > > > >- mmergeplus, appendplus, joinbyplus (but rather long) > >- aappend, jjoinby, (but look like typos, besides mmerge > already exists) > > > >2) Would anyone find these useful, i.e. should they be posted on SSC?


Re: st: Mata function stata() within program

Many thanks for this. I will carefully look at it. It seems that this will make the program much faster, which would be _very_ important. The snippet will often run more than a million times ...

William Gould, Stata wrote: > Ulrich Kohler <kohler@wz-berlin.de> wrote, > > > I have a Mata function which looks as follows: > > > > -------------------------------------lsq.mata-- > > (...) > > // Mata Function to extract the substitution costs from subcost-matrix > > void showhash(real rowvector R) > > { > > string scalar key1 > > st_local("key1",key1) > > key1 = strofreal(R[1,2]) > > st_local("key1",key1) > > stata("local hash1 = mod(`key1',197)") > > } > > > > (...) > >------------------------------------------------ > > and Uli notes that when he runs it, he gets an error, > > > : R = 2,3,5,4 > > : showhash(R) > > > > invalid syntax > > stata(): 3598 Stata returned error > > showhash(): - function returned error > > <istmt>: - function returned error > > r(3598); > > Alan Riley <ariley@stata.com> has already given a solution, and suggested > the line > > stata("local hash1 = mod(`key1',197)") > > be changed to read > > stata("local hash1 = mod(\`key1',197)") > > Alan's right, but his solution is too tricky for me. Moreover, his > solution shows he is still thinking an ado mode rather than Mata mode. > > My suggested solution is > > stata("local hash1 = mod(" + key1 + ", 197)") > > and, with my solution, Uli's code can be simplified to read, > > > void showhash(real rowvector R) > { > string scalar key1 > > key1 = strofreal(R[1,2]) > stata("local hash1 = mod(" + key1 + ", 197)") > } > > or even > > void showhash(real rowvector R) > { > stata("local hash1 = mod(" + strofreal(R[1,2]) + ", > 197)") } > > Let me expound on the ado versus the Mata way of thinking. > > Uli wanted to run the Stata command > > local hash1 = mod(_______, 197) > > where he substituted the value from Mata matrix R[1,2] for ______. > Why Uli wanted to do this, we don't know, nor care. > > The ado way of thinking says we substitute a macro for _____, and arrange > for the macro to contain R[1,2], so when the macro is substituted by > Stata, we obtrain the desired result. Good way of thinking, when you > are writing an ado-file. > > The Mata way of thinking is more direct: we need to construct a string > > "local hash1 = mod(_______, 197)" > > where where R[1,2] is substitued for _____, and we can just use the > standard operators to do that, > > "local hash1 = mod(" + strofreal(R[1,2]) + ", 197)" > > Here's a good rule: It's perfectly okay to obtain input from macros, or > post output to macros. That is one way Mata can communicate with > ado-files. If you have to use macros to obtain your result, however, you > are thinking ado, not Mata. There's a simpler, more direct way. > > -- Bill > wgould@stata.com > * > * For searches and help try: > * http://www.stata.com/support/faqs/res/findit.html > * http://www.stata.com/support/statalist/faq > * http://www.ats.ucla.edu/stat/stata/

-- kohler@wz-berlin.de +49 (030) 25491-361

Re: st: GOOGLE ANSWERS FORWARDED

What is this?

I propose three simple principles:

1. If someone wants to join Statalist and post to it, they should feel welcome, and they should read the FAQ to see how we operate. Nothing new there.

2. If someone wants to post something to Statalist on behalf of someone else, that's OK so long as they explicitly undertake to answer subsidiary questions and forward the answers.

3. Otherwise I see zero point in answering questions like this, as we have no assurances of an answer being seen, or "someone" or "Martin" answering any questions it raises. The appeal to urgency is also objectionable.

Nick n.j.cox@durham.ac.uk

--------------------------------------------------------------------------------
From: noah_kauffman@prusec.com
Subject: st: GOOGLE ANSWERS FORWARDED
Date: Wed, 22 Mar 2006 10:34:11 -0500

Someone @ google answers asks:

Ok, I'm trying to run a fixed-effects panel in stata. My regression: (where var1 is the depend. variable)

(PLEASE SEE DATA AT END) - Data pasted from excel into stata via editor Stata commands were as follows:

"tsset id year" "gen var1 = ln(x3)" "gen var2 = ln(var 1[_n-1])" ie. require ln(x3 i,t-1) "gen var3 = ln(x2)" "gen var4 = ln(5 +x1)"

xtreg var1 var2 var3 var4, i(id), fe

Now for the coefficient on var2 I get 0.177, and I'm looking for something around the 0.5 region. I've gone wrong somewhere - can someone please help??? It's pretty urgent as I need this by Thursday morning at latest. If it helps, I'm trying to estiamte the model on page 1146 for OECD countries, (but with my own data) in Islam's paper: "Growth Empirics: A Panel Data Approach" (1995) I'm online all the time, so if necessarty, I can clarify things beforehand.

<zap>


st: GOOGLE ANSWERS FORWARDED

Someone @ google answers asks:

Ok, I'm trying to run a fixed-effects panel in stata. My regression: (where var1 is the depend. variable)

(PLEASE SEE DATA AT END) - Data pasted from Excel into Stata via the editor. Stata commands were as follows:

"tsset id year"
"gen var1 = ln(x3)"
"gen var2 = ln(var 1[_n-1])"   ie. require ln(x3 i,t-1)
"gen var3 = ln(x2)"
"gen var4 = ln(5 +x1)"

xtreg var1 var2 var3 var4, i(id), fe

Now for the coefficient on var2 I get 0.177, and I'm looking for something around the 0.5 region. I've gone wrong somewhere - can someone please help??? It's pretty urgent as I need this by Thursday morning at latest. If it helps, I'm trying to estimate the model on page 1146 for OECD countries (but with my own data) in Islam's paper: "Growth Empirics: A Panel Data Approach" (1995). I'm online all the time, so if necessary, I can clarify things beforehand.

Thanks for your help,

Martin id Year x1 x2 x3 69 1980 1.662461763 14.98616639 17366.351 69 1985 1.751607907 11.13300528 16621.06196 69 1990 1.687373049 9.304076135 27174.53416 69 1995 1.145010297 11.69968344 30926.21307 69 2000 1.287333281 13.37070116 30192.66929 70 1980 0.822325571 19.58621955 16393.97505 70 1985 0.810340975 15.2813291 13144.98494 70 1990 0.524329844 13.18757172 30948.48975 70 1995 0.535188192 16.75931281 43827.01387 70 2000 0.210425117 15.7938949 35069.97475 71 1980 0.602471123 21.51876931 18890.07907 71 1985 0.450791288 18.81103302 12464.20844 71 1990 0.109973609 17.44272923 29622.43449 71 1995 0.188184945 16.84773807 41116.6454 71 2000 0.143463565 16.66518245 33698.04446 72 1980 1.793759025 13.47103394 15883.10011 72 1985 1.13896003 10.38388336 19731.97361 72 1990 1.329878954 10.02094259 30365.61038 72 1995 1.011132436 10.51075658 29259.80686 72 2000 1.090468632 12.83921854 33940.05624 73 1980 0.477478563 15.80453557 20635.75266 73 1985 0.474309767 14.65851051 17595.00342 73 1990 0.33966525 10.00934545 38504.07008 73 1995 0.395259882 11.23964728 51179.45169 73 2000 0.16514965 7.732210879 44379.18977 74 1980 0.423406049 20.10184597 16151.36943 74 1985 0.533425877 17.03576594 16286.31437 74 1990 0.17987329 15.98985483 40788.69617 74 1995 0.318379132 14.12707098 38004.6729 74 2000 0.300608318 13.51151556 34586.6218 75 1980 0.893883928 22.8138795 19857.92277 75 1985 0.961587541 19.68265195 14615.15092 75 1990 0.522867504 19.65530273 32582.27016 75 1995 0.285397244 15.93692676 41063.60444 75 2000 0.236410263 16.32833599 34134.14132 76 1980 1.32316342 20.73041017 7904.237485 76 1985 0.922498984 19.56876374 6319.746039 76 1990 0.995140916 21.41446384 12340.93113 76 1995 0.903410832 16.741426 16436.26653 76 2000 0.442745708 17.34889112 15296.49573 77 1980 1.6190352 14.66842754 10467.50384 77 1985 0.984051537 13.11345854 9644.005897 77 1990 0.52640211 13.47451789 22004.14509 77 1995 1.573243946 15.01458907 28639.96061 77 2000 2.010459298 16.43950604 37062.08373 78 1980 0.68755784 18.83731856 12316.87681 78 1985 0.85343638 17.14138777 11127.02382 78 1990 0.326057221 19.51414315 28247.96617 78 1995 0.200326573 21.82167717 27807.58652 78 2000 -0.29931545 17.18042319 27566.33909 79 1980 0.801608658 22.79975178 13501.75246 79 1985 0.915264718 18.50416861 16474.29537 79 1990 0.723636851 19.17730187 35344.51147 79 1995 0.19249092 22.90417495 60613.2475 79 2000 -0.145385709 19.08132976 54827.36543 80 1980 1.387616421 14.94607015 19026.16294 80 1985 1.025591315 11.06662356 13313.43306 80 1990 0.75005838 9.991219054 28602.29705 80 1995 0.432385881 15.01848763 39256.06623 80 2000 0.500530203 16.40487928 34288.62992 81 1980 0.828691467 20.71323278 11601.11144 81 1985 1.311964138 20.45743036 10716.73514 81 1990 1.3691672 18.28935167 19337.00316 81 1995 1.352106185 19.59779156 25288.2837 81 2000 0.900292987 20.21200377 20582.89207 82 1980 0.587584723 15.61925349 24668.24692 82 1985 0.682825353 11.2998007 23825.56852 82 1990 0.543734919 9.963804811 42244.72488 82 1995 0.525269517 11.79609955 52537.70169 82 2000 0.674729984 16.59774348 57349.30306 83 1980 1.462021324 19.39049446 4808.80038 83 1985 0.692344139 10.83176142 3804.655031 83 1990 0.24932822 14.84951396 10872.80679 83 1995 0.165472127 18.03693854 16241.68266 83 2000 1.032193538 18.57355586 15384.17332 84 1980 1.224982892 17.50583952 9429.244045 84 1985 1.067391322 14.1583388 6892.579016 84 1990 0.755676566 12.92168201 19659.18951 84 1995 0.591949849 11.73567215 21807.56001 84 2000 0.647429503 10.58827526 20354.88874 85 1980 0.255875886 18.00839687 
24331.0159 85 1985 0.220629595 16.16479109 19377.70035 85 1990 0.47660532 15.96377273 43624.07581 85 1995 0.412345421 15.32120281 44019.86808 85 2000 0.273147601 14.41038065 41946.34968 86 1980 0.496489298 15.48425823 26419.38091 86 1985 0.941437445 13.88066924 22534.61193 86 1990 0.952530759 14.10603859 51114.72784 86 1995 0.641787638 14.00834142 65537.35073 86 2000 0.224554799 17.41319943 50647.46892 87 1980 2.612506141 17.82209964 2843.972229 87 1985 3.50641537 15.35851228 2251.386322 87 1990 2.741307328 16.79774136 4417.057814 87 1995 2.819981075 18.63495028 4297.528683 87 2000 2.376735812 13.4780134 4493.247857 88 1980 0.484872291 23.73434618 14863.17762 88 1985 0.544248613 21.78829843 12241.47715 88 1990 0.171420325 23.40476948 26392.5887 88 1995 0.177399289 17.48076441 30017.22473 88 2000 0.2890547 19.05241986 37523.02325 89 1980 1.510727766 16.39963029 18387.85317 89 1985 0.907081959 12.01908341 26494.88105 89 1990 0.845319833 12.206305 35104.42998 89 1995 1.224515978 14.20890757 42224.98562 89 2000 1.382534638 12.96683686 52391.32554


Re: st: Naming convention, Ideas?

Maybe something that works off the notion of "generalized."

One option would be to use the prefix approach, to create a syntax like:

. genmanip : merge ...

. genmanip : append ...

and so on. Then you have only one .ado file to maintain, easily allowing options that apply to your command (distinct from the append, merge, etc. options), etc.

See -help _on_colon_parse- for a Stata command that helps parsing that syntax.
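A bare-bones sketch of that prefix idea (the name -genmanip- is only illustrative, and this assumes -_on_colon_parse- returns the two halves in s(before) and s(after); check its help before relying on that):

program define genmanip
    _on_colon_parse `0'
    local opts `"`s(before)'"'    // anything typed before the colon
    local cmd  `"`s(after)'"'     // the merge/append/... call to run
    * ... handle .dta.gz / .txt / .csv conversion of -using- here ...
    `cmd'
end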

I'm not sure -genmanip- is a great name, but something like that?

--Nick Winter

At 10:10 AM 3/22/2006, you wrote: >I am looking into writing a suite of wrapper data management >commands around merge, mmerge, append, joinby, and cross that can >either take a stata data file, gzip compressed data file or simply a >comma or tab delimited text file as the -using- argument, e.g. ><cmd_name> using *.dta | *.dta.gz | *.dgz | *.txt | *.cvs [, * ]. > >Two questions: >1) Any ideas w/ regard to a consistent naming convention that could >be used? as I'd like to get it right the first time. I am not very >fond of using an integer as a suffix a la cf2, cf3 for various >reasons (e.g not very informative, unclear if integers imply >incremental functionality, can conflict with others' names). So far >I thought of: > >- mmergeplus, appendplus, joinbyplus (but rather long) >- aappend, jjoinby, (but look like typos, besides mmerge already exists) > >2) Would anyone find these useful, i.e. should they be posted on SSC? > > >Patrick Joly > >* >* For searches and help try: >* http://www.stata.com/support/faqs/res/findit.html >* http://www.stata.com/support/statalist/faq >* http://www.ats.ucla.edu/stat/stata/

________________________________________________________ Nicholas J. G. Winter 607.255.8819 t Assistant Professor 607.255.4530 f Department of Government nw53@cornell.edu e Cornell University falcon.arts.cornell.edu/nw53 w 308 White Hall Ithaca, NY 14853-4601


Re: st: Naming convention, Ideas?

My suggestion is none of these. Find a suitable name, say -pjcombine-, and then write a command with subcommands, so that your syntax is

pjcombine append <whatever>
pjcombine merge <whatever>
...
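A rough sketch of what such a dispatcher might look like (the name and the behaviour are placeholders, not a worked-out command):

program define pjcombine
    gettoken subcmd 0 : 0                // peel off the subcommand
    if "`subcmd'" == "append" {
        append `0'                       // e.g. pjcombine append using file.dta
    }
    else if "`subcmd'" == "merge" {
        merge `0'
    }
    else {
        display as error "unknown pjcombine subcommand: `subcmd'"
        exit 198
    }
end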

Reasoning:

0. There are many precedents in Stata itself.

1. One program is easier to keep track of than several.

2. -combine- is a good word, but StataCorp lay claim to all the words in the English language. Some of us have forgotten that in the past -- "accidentally on purpose", perhaps -- and you might choose to forget that too. At worst, StataCorp may grab "your" program name and either your program has been superseded, or you need to change the name.

3. -pj- would be both modest and good PR.

Put them on SSC!

P.S. on integers: there are two conventions in use. Typically low integers (esp. 2, 3) mean versions of the command; high integers (esp. 5, 6, 7, 8) mean versions of Stata it works with. In practice that is less confusing than it seems, as although many programs go through much revision, programmers don't change names that often.

If <program><n> means "works with Stata <n>", then I suggest that should always be explicit in its help.

Nick n.j.cox@durham.ac.uk

Joly.Patrick

I am looking into writing a suite of wrapper data management commands around merge, mmerge, append, joinby, and cross that can either take a stata data file, gzip compressed data file or simply a comma or tab delimited text file as the -using- argument, e.g. <cmd_name> using *.dta | *.dta.gz | *.dgz | *.txt | *.cvs [, * ].

Two questions: 1) Any ideas w/ regard to a consistent naming convention that could be used? as I'd like to get it right the first time. I am not very fond of using an integer as a suffix a la cf2, cf3 for various reasons (e.g not very informative, unclear if integers imply incremental functionality, can conflict with others' names). So far I thought of:

- mmergeplus, appendplus, joinbyplus (but rather long) - aappend, jjoinby, (but look like typos, besides mmerge already exists)

2) Would anyone find these useful, i.e. should they be posted on SSC?


st: RE: left-truncation of entry in survival analysis

I am assuming that the covariates in the 2 models are all time-independent. In general, the 2 models are not equivalent. However, it is entirely possible that they might give the same parameter estimates, in at least some specific sets of data.

Survival analyses (as their name suggests) are based on events whereby one subject is observed to survive another, ie one subject at risk on a particular day survives to the end of that day and another subject at risk on the same day is dead by the end of that day. The two models are different in what is meant by "the same day" for the two subjects. In the first model, we compare the fate of Subject A on Day X of the life of Subject A with the fate of Subject B on Day X of the life of Subject B, for all pairs of Subjects A and B who were both under observation in the study on Day X of their respective lives. In the second model, we compare the fate of Subject A on Day Y of Subject A's study time (measured from Subject A's entry into the study) with the fate of Subject B on Day Y of Subject B's study time (measured from Subject B's entry into the study), for all pairs of Subjects A and B who were both under observation on Day Y of their respective study time windows.

In a specific study, it might be the case that, for each Subject A who died on Day X of his/her life and Day Y of his/her study time, the set of Subjects B who survived through the Days X of their respective lives in the study might be the same set as the set of Subjects B who survived through the Days Y of their respective study times in the study. This might especially be the case if the number of subjects is small and/or deaths in the study are sparse. For such a specific study, the two Cox regressions will give the same parameter estimates. However, this will not be the case for all studies. For instance, in some studies, there will be pairs of Subjects A and B, such that Subject A dies in the study at 100 years of age after having entered the study at 99 years of age, whereas Subject B dies in the study at 40 years of age after having entered the study at 30 years of age. In this case, the first model will assume that neither patient was observed to survive the other, whereas the second model will assume that Subject B has survived Subject A, even though Subject B died younger.
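For concreteness, the two set-ups being contrasted are the ones from the original question, shown side by side (x1 and x2 stand in for whatever time-independent covariates are in the model):

* Model 1: age is the analysis time; risk sets are aligned on age
stset age, fail(died) enter(ageatentry)
stcox x1 x2

* Model 2: time-on-study is the analysis time; age handled as a covariate
stset timeonstudy, fail(died)
stcox x1 x2 ageatentry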

I hope this helps.

Roger

Roger Newson Lecturer in Medical Statistics POSTAL ADDRESS: Respiratory Epidemiology and Public Health Group National Heart and Lung Institute at Imperial College London St Mary's Campus Norfolk Place London W2 1PG STREET ADDRESS: Respiratory Epidemiology and Public Health Group National Heart and Lung Institute at Imperial College London 47 Praed Street Paddington London W1 1NR TELEPHONE: (+44) 020 7594 0939 FAX: (+44) 020 7594 0942 EMAIL: r.newson@imperial.ac.uk WEBSITE: http://www.imperial.ac.uk/nhli/r.newson/ Opinions expressed are those of the author, not of the institution.

-----Original Message-----
From: owner-statalist@hsphsun2.harvard.edu [mailto:owner-statalist@hsphsun2.harvard.edu] On Behalf Of Sue Chinn
Sent: 22 March 2006 12:47
To: statalist@hsphsun2.harvard.edu
Subject: st: left-truncation of entry in survival analysis

Dear Statalist readers,

Reports of survival analysis which use age as the time scale rather than time-on-study often 'adjust for delayed entry'. In Stata this is achieved by:

stset age, fail(died) enter(ageatentry)

(see the recent e-mail from Dawn Teele, or the reply to st: streg from rgutierrez@stata.com on 19th September 2002.)

However, a model fitted with the above stset gives exactly the same answer as one with

stset timeonstudy, fail(died)

provided timeonstudy=age-ageatentry (as it normally would be, though it might not be exactly, depending on how the variables were calculated), and the models are exactly the same. In the second model it is usual to adjust or stratify on age, while in the first it isn't, as age is supposedly taken into account, so researchers may not have realised the equivalence.

So, am I missing something, or are advocates of the first model deluding themselves? Can left truncation be ignored with age as the timescale?

Thanks

Sue

Sue Chinn Professor of Medical Statistics Division of Asthma, Allergy and Lung Biology King's College London 5th Floor Capital House 42 Weston Street London SE1 3QD

tel no. 020 7848 6607 fax no. 020 7848 6605

