Results of practical research in coursework


Classification of research results

Let's consider several ways in which the results can be classified:

  • by intentionality (whether they were planned): planned or unplanned;
  • by quality: positive or negative;
  • by recording time: current, intermediate or final;
  • by degree of significance: significant or insignificant;
  • by degree of dependence on the researcher: direct or indirect;
  • by compliance with the goals: full or partial compliance with the goals and objectives.

Relate your own results to each of these classifications.

Let's turn to poetry

It is difficult to say which is harder: interpreting a poetic text or working with prose. A feature of literary language is the polysemy of words, which significantly complicates understanding: the same concept can be interpreted in completely different ways, especially if the word has changed its lexical meaning over time. For example, a "C student" in the modern sense is a pupil who gets mediocre grades, whereas in texts of the nineteenth and early twentieth centuries the same word referred to a coachman driving a team of three horses.

Another problem in interpreting a poetic text is tropes. Allegories, metaphors and epithets, which are not always understandable to the common man, become a real disaster, especially for a modern schoolchild, to whom many concepts of classical literature are alien. In addition, people perceive phenomena differently, so it is impossible to say with absolute certainty that the interpretation of a poetic text will be correct given the possibility of individual interpretation of concepts.

Conclusions and results of practical research: what is the difference

Do not confuse the results of a study with its conclusions. The former are objective and belong in the practical part, while the latter are subjective and belong in the final part.

Results are the indicators, facts and values recorded by the author during the study. Conclusions are the author's own judgments and reflections, an attempt to independently explain the results.

The results can strongly influence the conclusions and even change them radically. To illustrate this, here is a fragment of a coursework paper that shows how the results obtained from studying published articles shaped the final conclusions.

Semi-quantitative analysis

A semi-quantitative method for measuring antibody levels is more informative than a qualitative one, because it makes it possible to determine not only the presence of antibodies in a blood sample but also their relative concentration (the positivity coefficient, or S/C index); for this reason it is sometimes called a highly sensitive qualitative method.

Positivity Rate and S/C Index

The result of a semi-quantitative analysis does not give the absolute amount of antibodies, as the quantitative method does, but only a positivity coefficient (CP) or an S/C index, which reflect the relative content of antibodies in a blood sample. And although commercial laboratories state that semi-quantitative analysis should not be used to compare tests over time, there is still a correlation between the CP or S/C index and the amount of antibodies in the blood.

Specialist's workshop, No. 8, 2021.

If it is not possible to donate blood for the quantitative determination of immunoglobulins M/G, semi-quantitative results can still be used to judge the dynamics of the antibody titer indirectly. For example, if the first result at the beginning of the disease showed CP = 7.25, and a week later CP = 16.15, then the immunoglobulin titer has clearly increased. Although the exact antibody counts are unknown, the dynamics of the titer can be traced.
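A minimal sketch of such a comparison in Python, using the two hypothetical CP values from the example above:

# Compare two semi-quantitative results to judge the dynamics of the antibody titer.
# The CP values are the illustrative numbers from the example above.
cp_first = 7.25    # positivity coefficient at the beginning of the disease
cp_second = 16.15  # positivity coefficient one week later
if cp_second > cp_first:
    trend = "the antibody titer has increased"
elif cp_second < cp_first:
    trend = "the antibody titer has decreased"
else:
    trend = "no change detected"
print(f"CP changed from {cp_first} to {cp_second}: {trend}")
print(f"approximate fold change: {cp_second / cp_first:.2f}")  # relative, not an absolute count

Keep in mind that the CP is only a relative indicator, so the fold change above describes the dynamics, not the absolute amount of antibodies.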

Difference between CP and S/C

In essence, both indicators express the same thing: the antibody level, but measured on different equipment, against different antigens and with different tables of reference values.

Positivity rate:

  1. Equipment: RealBest automated complex, Vector-Best, Russia; test system [5501] SARS-CoV-2-IgG-ELISA-BEST
  2. Determines the IgM/IgG level against the spike (S) protein
  3. Positive result (reference value): CP > 1.1

S/C index:

  1. Equipment: Architect i2000SR, Abbott Diagnostics, USA; SARS-CoV-2 IgG test system (reagents for ARCHITECT: SARS-CoV-2 IgG Reagent Kit)
  2. Determines the IgM/IgG level against the nucleocapsid (N) protein
  3. Positive result (reference value): S/C > 1.4

Reference values

  1. CP < 0.8 or S/C < 1.0 - negative result
  2. 0.8 < CP < 1.1 or 1.0 < S/C < 1.4 - questionable result
  3. CP > 1.1 or S/C > 1.4 - positive result
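As a sketch only, the thresholds listed above can be turned into a small helper that classifies a raw reading; the function name and structure are illustrative, not part of any laboratory software:

# Classify a semi-quantitative antibody result against the reference values listed above.
# 'kind' selects the scale: "CP" (positivity coefficient) or "S/C" (S/C index).
def classify_result(value: float, kind: str = "CP") -> str:
    if kind == "CP":
        low, high = 0.8, 1.1
    elif kind == "S/C":
        low, high = 1.0, 1.4
    else:
        raise ValueError("kind must be 'CP' or 'S/C'")
    if value < low:
        return "negative"
    if value > high:
        return "positive"
    return "questionable"

print(classify_result(7.25, "CP"))  # positive
print(classify_result(1.2, "S/C"))  # questionable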

Decoding the results

Negative result:

  1. Complete absence of antibodies (no contact or incubation period after infection)
  2. Very low level of antibodies (prodromal period of the disease, the first half of the height of the disease)

A questionable result means a low level of antibodies, which occurs:

  1. At the beginning of the disease
  2. Long after recovery

A positive result means a sufficiently high level of antibodies, which happens:

  1. At the height of the disease
  2. During the recovery period
  3. After an illness

How to analyze results

Two conditions that must be met when reporting results:

  • compare independently obtained results with those already available in scientific sources;
  • mention not only final results, but also intermediate ones.

Analysis methods are varied and their choice directly depends on the discipline within which the research is being conducted:

  • statistical methods;
  • factor, variance (ANOVA) and correlation analysis, which facilitate hypothesis testing;
  • identification and description of correlations between the studied variables.

Dedicated software packages are also used for analysis: for example, Statistica, SPSS or Vortex, which help process large volumes of field research data. But for student papers such as coursework or theses, Excel is usually sufficient.
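For a simple correlation analysis, a few lines of Python with the scipy library can serve as a free alternative to the packages above; the variable names and numbers are purely illustrative:

# Pearson correlation between two hypothetical measured variables.
from scipy import stats

hours_studied = [2, 4, 5, 7, 8, 10, 12]    # illustrative data
exam_score = [51, 58, 60, 69, 71, 80, 88]  # illustrative data

r, p_value = stats.pearsonr(hours_studied, exam_score)
print(f"correlation coefficient r = {r:.3f}, p = {p_value:.4f}")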

Interpretation in literature

In literature, interpretation is relative. This is explained by the fact that each reader interprets the text written by the author in his own way. It may turn out that the author conveyed one meaning of what was said, and the reader interprets the text in a completely different way. That is why the interpretation of a literary text is diverse and relative, which largely depends on how the reader himself reads and perceives it.

In different centuries, authors used particular styles of writing that were oriented toward the readers of their era. Thus, in antiquity metaphors and allegories were held in high regard, while in the Middle Ages authors preferred to turn to prose and poetry.

Interpretation is an individual approach to whatever a person perceives, be it a text, a phenomenon, an object or another person. That is why one and the same individual can appear good in the eyes of one person and bad in the eyes of another, even though both are looking at the same individual, who does not change in his behavior.

Practical significance of the results of practical research

The introduction outlines both the theoretical and the practical significance of the study. Here we consider the latter: the answer to the question of how the results obtained can be used in practice.

Naturally, practical significance increases the value and overall success of the work. Describe it clearly and understandably, but at the same time succinctly. Be sure to indicate the field for which the results will be particularly relevant, and provide compelling evidence and arguments demonstrating the benefits of the results of your specific research.

One effective and popular way to do this is a comparative analysis of the situation before and after applying your methods.

Below are recommendations and an algorithm for analyzing the results of a clinical study obtained during statistical data processing, in order to confirm the statistical significance of the effect and to identify possible bias errors and confounders.

Interpretation of the results of statistical data processing is one of the most important, exciting and mysterious stages of clinical research. There is no place here for the routine that usually accompanies the collection or processing of data; this is a time of insights and guesses, of the birth of new knowledge or of the confirmation of ideas developed from practical experience and awaiting objective verification. The process is complex and creative, requiring, on the one hand, logical rigor, broad erudition and fluency in the profession, and on the other, enough courage to move away from the generally accepted standard position in order to see something new and significant. There are no recipes for carrying out this stage successfully; there are only tips for beginners: an experienced researcher, as a rule, already has his own techniques and secrets honed by personal experience. This may be one of the reasons why courses on the theory and practice of clinical research pay significantly less attention to the interpretation of results than to the development of study design or methods of statistical data analysis. This is a case where the share of attention is not adequate to the importance and complexity of the process.

Today we will talk specifically about the tasks and problems of interpreting the results of a clinical trial. Although, as noted above, there are no ready-made recipes for this stage, there is still a certain scheme, almost an algorithm, following which you can analyze the results deeply and comprehensively and obtain strong arguments in support of your ideas and assumptions. In this article we consider studies that seek to detect and/or confirm the presence of some effect or relationship between an exposure and an outcome.

What do we see as the result of any such study, regardless of its design? A certain number (a correlation coefficient, a relative risk, etc.) that expresses the strength or degree of statistical association between the exposure factor (or risk factor in observational studies) and the outcome (a syndrome, the severity of a disease, etc.) that is the subject of our research interest. So far this number is nothing more than a formally calculated metric of the association between the factor of interest and the outcome, which, according to our assumptions, should be connected in some way. However, what do we really want to know when we conduct our research? The metric itself, the number, is of little interest to us. In fact, we need to decide whether the obtained association is a cause-and-effect one, that is, whether such a connection exists objectively, in nature, between the influencing factor and the outcome. If so, then any change in this factor should cause a change in the risk of developing this outcome. The general goal of analyzing the results is to identify such "real" associations between factors and the event under study (the outcome) and to filter out associations that arose by chance due to data variability, incorrect design, or incorrect application of statistical methods.

To find grains of gold, you need to wash the gold-bearing rock thoroughly through several filters. In much the same way, the obtained results must be passed through three main analytical filters in order to obtain reliable dependencies and connections between the factors and events being studied. These three filters make up the following cascade of questions and tests:

1. Is the obtained association between a factor and an event a valid (reliable) statistical association, or is it due to chance? To answer this question it is necessary to estimate the probability of:

  • random error;
  • bias error;
  • interference from confounders (third-party or unaccounted-for influences).

2. Is the resulting association explicable?

To answer this question, a set of positive criteria is used, which allows a verdict to be reached on the plausibility of such a connection between the factor and the outcome by assessing:

  • the strength of the obtained association;
  • its consistency with other studies;
  • its biological plausibility (persuasiveness);
  • a dose-dependent effect of the exposure (when studying drugs or procedures that can be dosed).

3. Is it possible to expand the scope of application of the obtained result beyond the target population (generalizability)? This property is otherwise called “external validity” of a clinical trial. The resolution of this issue is entirely within the competence of the expert conducting the analysis of the research results, but we will discuss the basic principles of this analysis below.

This entire three-stage scheme for analyzing the obtained results ultimately allows us to answer one important question: can we confidently say that it is the influencing factor we studied that determines the change in the outcome under study, or is there some alternative explanation for our research findings?

So, let's look at each of the three main issues in more detail.

1. Validity of statistical association

Random error

First of all, pay attention to the sample size. The smaller it is, the greater the likelihood of being misled simply because of the probabilistic nature of the measurements. This issue has been discussed repeatedly in our articles [1]. Since in research we can only estimate the strength of a connection rather than obtain its exact value (after all, most of the population remains outside our measurements), any estimate made under conditions of incomplete information (a certain degree of uncertainty) will contain a random error. This error is called random not because we do not expect it [2]; on the contrary, we are well aware of it. It is called random because it is due to the stochastic (probabilistic) nature of the data we are studying. Such an error depends on the variability of the indicators whose relationship is being studied, as well as on the sample size [1]. It is very simple: the greater the variability, the larger the sample size required to achieve an acceptable random error. But that is the theory; what should we do when we already have the effect size calculated from our data, with its specific random error? We need to take three sequential steps:

• Estimate the effect size. If you obtain a small value for the correlation coefficient (or any other statistical parameter reflecting the strength of the connection), this is a reason to think about how real and significant the connection is, even if it meets all the formal requirements of objectivity.

• Test the hypothesis of its statistical significance (a significant difference from the null effect). As a result of this test you will obtain a p-value, which allows you to estimate the probability of falsely deciding that a connection (or an effect of the influencing factor) is present when in reality there is no such connection, that is, the factor does not affect the outcome at all. Typically the threshold for making such a decision is p = 0.05 (the significance level), but this is just a tribute to tradition; each researcher may set his own significance level, stricter or more lenient.

• Determine the accuracy of the obtained estimate of the effect under study. The accuracy of the estimate is determined by the interval that, with a given probability, includes the real value of the effect; this is nothing other than the confidence interval, most often 95%, but, like the significance level, its probability can be varied in accordance with the conditions of a particular study.

Paying attention to these three points when analyzing the obtained correlation coefficients will allow you to exclude dubious effects and keep only those in whose defense you can bring the entire arsenal of statistical arguments.
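A minimal sketch of these three steps in Python, assuming the relationship is measured by a Pearson correlation; the data are invented for demonstration:

# Three steps: effect size, significance test, confidence interval.
import numpy as np
from scipy import stats

x = np.array([1.2, 2.3, 2.9, 3.8, 4.1, 5.5, 6.0, 7.2])  # illustrative exposure values
y = np.array([10, 14, 13, 18, 20, 22, 25, 27])           # illustrative outcome values

# Step 1: effect size (Pearson correlation coefficient)
# Step 2: p-value of the test against the null effect
r, p = stats.pearsonr(x, y)

# Step 3: 95% confidence interval via Fisher's z-transformation
n = len(x)
z = np.arctanh(r)
se = 1.0 / np.sqrt(n - 3)
ci_low, ci_high = np.tanh(z - 1.96 * se), np.tanh(z + 1.96 * se)

print(f"r = {r:.3f}, p = {p:.4f}, 95% CI = ({ci_low:.3f}, {ci_high:.3f})")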

Each study must have a so-called primary (main) goal. This is the main question the study must answer. The design is built around it, endpoints are determined and suitable methods for statistical data analysis are selected. However, it would be very wasteful to spend a lot of time and effort just to obtain one four-field (2x2) table and leave it at that. As a rule, the study also includes secondary goals, which are addressed within the capabilities of the given design. Usually this is a more in-depth study of the relationship (effect) between a factor and an outcome within subgroups created by stratifying the entire sample according to some characteristics, for example, dividing it by gender or age of patients, severity of concomitant pathology, etc. Studying how the effect changes depending on gender, age or other characteristics is called subgroup analysis [3] or analysis of effect modification. Indeed, quite often the effect found in the overall sample is transformed when it is recalculated within subgroups of the same sample formed by gender, age, geography or other characteristics.

However, when conducting such an analysis one subtlety must be kept in mind. If the study of effect modification in subgroups was planned at the design stage, and not only was a decision made about such an analysis but specific hypotheses were formulated about which characteristics might change the effect, this situation is called hypothesis testing. If the idea of analyzing effect modification in subgroups arose after the data had been collected and processed in the overall sample, you find yourself in the situation of hypothesis formulation. It is not just a matter of different names: the situations are radically different and force us to perceive the results of subgroup analysis differently. The point is that if, before receiving the data, you have a hypothesis that the relationship between the factor and the outcome may differ, for example, between women and men, or between young and old patients, you prepare your data with the intention of testing these hypotheses. Typically there are not many such preliminary considerations, even if your study contains a large number of categorical variables. Thus the number of additional hypothesis tests will be small, and statistical significance achieved at the traditional level of 0.05 will be an objective argument in favor of the assumption you made before data collection began.

However, another situation is common: a kind of scanning of the studied relationship across all possible subgroups of patients in order to detect a statistically significant difference under some variant of stratification, without formulating any intelligible assumptions about the behavior of the effect, and, moreover, after the data have already been collected. In the English-language literature this process is called a fishing expedition. The term is chosen very aptly; it clearly reflects the capabilities and credibility of the results of such an analysis. As in fishing, when scanning the effect across all possible subgroups we do not know in advance what to expect, and we may pull out either a large fish or an empty tin can; the problem is that, unlike in fishing, we unfortunately cannot reliably identify what we have actually caught, even if we obtain a statistically significant difference. Why? It is very simple; just remember what our threshold (significance level) of 0.05 means [4]. It means that if in fact no real difference exists, then out of 100 comparisons about 5 will nevertheless show a statistically significant difference purely by chance. The theory does not allow us to determine exactly which of the significant differences obtained are misleading us. A between-group significance threshold of 0.05 will protect us from bad decisions only if we test pre-formulated assumptions and their number is small. Usually there are few such pre-formulated hypotheses, far fewer than a hundred and hardly even a dozen. However, when scanning the effect across all possible variables and features included in the study, the number of such comparisons becomes prohibitive, so there is a high probability of obtaining a mixture of truly existing differences and differences that occurred by chance, simply because of how our sample turned out; in another study, on another sample, these differences will not be statistically significant and may even be completely absent.

This does not mean that such scanning of the effect across all subgroups is useless. Quite the contrary, it is of great value, especially in pilot observational studies, since it allows you to make new assumptions and formulate hypotheses, which can then be tested in an experiment or an observation designed specifically for this purpose. The point is that such results should be treated with great caution, even if statistical significance has been obtained, because in a "fishing trip" situation statistical methods do not guarantee the significance level you established for your comparisons. To briefly summarize what has been said about subgroup analysis: the interpretation of test results in the "hypothesis testing" and "hypothesis formulation" situations should be quite different.
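A back-of-the-envelope illustration of why such "fishing" is dangerous, assuming the subgroup comparisons are independent (a simplification), with the Bonferroni correction shown as one common remedy:

# Probability of at least one false-positive "significant" difference
# among m independent comparisons at significance level alpha,
# assuming no real differences exist at all.
alpha = 0.05
for m in (1, 5, 20, 50):
    p_any = 1 - (1 - alpha) ** m
    print(f"{m:2d} comparisons -> P(at least one false positive) = {p_any:.2f}")

# One simple (conservative) remedy is the Bonferroni correction:
# each comparison is tested at alpha / m instead of alpha.
m = 20
print(f"Bonferroni-adjusted per-comparison threshold: {alpha / m:.4f}")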

So, the first stage of interpretation of the results is completed, you are convinced that the obtained correlation coefficient is statistically significant, that is, the presence of the effect cannot be explained by chance. However, it is too early to say that a connection really exists. It is necessary to analyze possible bias errors and exclude the presence of unaccounted confounders and third-party interfering factors.

2. Bias error

Bias error systematically underestimates or overestimates the value of the association coefficient (effect); in some situations it can create an association where none exists or, conversely, hide an existing one. Bias in the estimate of the effect relative to its actual value is introduced by any source of systematic error in a clinical trial (Fig. 1). The bias error is insidious and quite often fatal. Unlike a random error, which is always present in the statistical estimation of any parameter or association coefficient and for which specially developed methods of calculation and reduction exist, the bias error distorts reality and is misleading, and once it has been committed there are no methods not only to correct it but even to estimate its magnitude.

A random error never destroys the real picture; the most it can do is blur it so that the details are poorly visible. A bias error, by contrast, is like the subversive activity of a saboteur: it is not known where, when and what damage it will cause, but the consequences can be very serious and, most unpleasantly, it is usually impossible to correct or even reduce this error. All this should prompt you to inspect your own and others' studies carefully for the possibility of such an error.

Random error is present in any study, since a study deals with variables of a probabilistic nature. In the absence of bias error, measurements fall with equal likelihood on either side of the real value, so the larger the sample, the closer the estimate will be to it. The magnitude of a random error can always be estimated and reliably described by a confidence interval, which with a given probability will include the real value of the effect. If something in the study is done incorrectly, then in addition to the random error a bias error arises: a systematic shift of all measurements relative to the real value in one direction.

An estimate calculated in the presence of a bias error will underestimate or overestimate the value of the effect, and the confidence interval will not cover its real value. Below are the main points that must always be taken into account in order to minimize the likelihood of this serious error.

The source of bias error can be miscalculations:

  • at the stage of developing the design and enrolling patients in the study (selection bias);
  • at the stage of observation, registration and data collection (observation bias);
  • at the stage of primary data processing (performance bias);
  • and even at the stage of interpreting the results in the subject area of the study: if a researcher is an ardent supporter of one of the hypotheses being tested, he will unwittingly try to extract from the results as many arguments as possible in favor of his preference [4].

Sources of Bias in Study Enrollment

To avoid, or at least minimize, the likelihood of bias at the stage of creating a patient sample, you must always keep one important idea in mind: the sample must be representative, that is, it must reflect the properties of the study population in full and must not distort the balance characteristic of this population. If the ratio of men to women in the population is 1:1, this parity should be approximately maintained in the sample. If there are twice as many young people as older people in the population, then there should be approximately twice as many young patients in the sample. This seemingly simple requirement is in fact very complex and critical. For each study it must be worked out as thoroughly as possible, preferably together with a data-processing specialist, because errors made at this stage cannot be corrected by any subsequent tricks.

Randomization is the best way to avoid bias when recruiting a study sample, although even it does not give a 100% guarantee against bias. However, randomization is only possible in clinical trials, where the investigator plays an active role in assigning a particular treatment to a patient and recording the outcome of that treatment. In observational studies, where we play only the passive role of an observer of a factor's influence on patients, randomization is impossible. Various sampling techniques have been developed for such clinical studies to reduce the risk of bias.
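A minimal sketch of simple 1:1 randomization, assuming a flat list of patient identifiers and two study arms (all names are illustrative):

# Simple 1:1 randomization of patients into two study arms.
import random

patients = [f"patient_{i:03d}" for i in range(1, 21)]  # illustrative identifiers
random.shuffle(patients)                               # random order, independent of the investigator
half = len(patients) // 2
treatment_arm, control_arm = patients[:half], patients[half:]

print("treatment arm:", treatment_arm)
print("control arm:  ", control_arm)

In real trials the allocation sequence is normally pre-generated and concealed (for example, block randomization), but the underlying idea is the same.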

For example, if it is not possible to preserve the properties of the population without obvious distortions when enrolling patients, the two compared samples are recruited in such a way that they are statistically homogeneous, that is, so that there is no statistically significant difference between them in the most important characteristics of patients capable of influencing the result of the factor being studied. An important rule when forming a sample for observational studies is to eliminate, as far as possible, the participation of any member of the research team in the decision to include the next patient in the sample. This is not always fully achievable, but it must be strived for. The fact is that a doctor interested in promoting an idea will unwittingly try to recruit "interesting" patients, for example those who have had more investigations done or who have a more pronounced pathology affected by the treatment method under study. "Boring" patients will be screened out as being of no research interest. This phenomenon is far from a falsification of facts; it occurs against the will of the researcher, no matter how much he tries to control himself; that is simply how people work. This fact has been studied and noted in many works on research methodology, so it is preferable that patients be enrolled by a doctor who is not otherwise involved in the study.

Sources of bias errors at the stage of observation, registration and collection of primary data

The main source of bias errors during the data collection phase is violation of the study protocol. Strict adherence to the protocol ensures the same implementation of the exposure methodology, observation time, completeness of recording of results and measurements. In short: the slightest deviation from the protocol results in patients in different groups being monitored differently, which systematically biases their data. The word “different” is key when inspecting study results for bias errors, since it is the different protocol conditions for one and the second group that lead to a systemic shift in the data. A second pitfall at this stage can be observer or patient bias, especially if the end points are questionnaire results or scores on various scales.

It has been shown that patients with more severe pathology are more attentive to recording various sensations and tend to describe their condition to the doctor in more detail than those with a milder form of the same disease. Likewise, a doctor, under the influence of his own professional convictions, may unwittingly distort the information received from the patient. To eliminate such systematic errors, the technique of masking is used: the observer is not told which treatment method (or exposure/risk factor) was applied. In the English-language literature such studies are described as blind.

Masking may apply not only to the observer recording patient data, but also to the patients themselves (of course, this is not feasible in all cases), to the statistical data-processing specialist, and even to the person interpreting the results of the study. Depending on the number of masked stages, studies can be single-, double- or triple-blind. In general, the more the different stages are separated between specialists, the less the research conclusions are exposed to the danger of systematic bias, but this increases the cost and duration of the work, and besides, not every stage can be masked. In resolving this question, common sense must be used so as not to go too far.

3. Confounders (third-party interfering factors)

Any statistically significant relationship obtained between an influencing factor and an outcome must be carefully and critically analyzed in order to exclude the intervention of an unaccounted-for (and sometimes unknown) third-party player: a confounder. Figure 2 schematically shows how a confounder covertly interacts with both of the studied variables and "contaminates" their connection with its interference.

The influence of a confounder on both variables may be so strong that it is partially or entirely responsible for the association we observe between the factor and the outcome in our study. A confounder is no less insidious than a bias error, since it also leads to false conclusions about the relationship between the factor and the outcome, distorting the real picture.

The influence of confounding factors can be reduced to an acceptable level or eliminated entirely:

  • at the stage of design and data collection, by restricting the samples or selecting matched pairs (these methods require a separate detailed presentation);
  • at the stage of data analysis, by stratifying the sample according to a characteristic that significantly affects the outcome, or by including multivariate analysis in the statistical processing plan for the obtained data.

It should be noted that stratification during data processing, although it makes it possible to eliminate the influence of a confounder as far as possible, can give rise to another problem: an insufficient sample size in one of the subgroups. If the characteristic is unevenly distributed in the sample, for example if there are significantly more men than women, then when stratifying by gender it may turn out that the subgroup of women is too small for statistically reliable conclusions. If the researcher assumes in advance that gender (or any other characteristic) may affect the results of the study, it is better to take care of this at the stage of design and patient enrollment: recruitment should be planned so that the sample is balanced with respect to this characteristic. The above-mentioned methods of restriction and selection of matched pairs serve this purpose. When conducting a clinical trial, the most reliable way to protect yourself from confounders is still randomization. Precisely because it preserves the properties of the population in the resulting sample (provided, of course, that the sample is large enough), it also preserves, in an undistorted balance, the influence of all players in the process under study. An added bonus is that this applies both to third-party influences we know about and to those we are not even aware of. However, the proportion of clinical trials (experiments with an active role of the researcher) in our country is, unfortunately, small, so the methods of analyzing results discussed above should become a routine procedure for every researcher.
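A small numerical sketch of how a confounder can distort an association, on invented data: the crude relative risk suggests a strong effect, while within each stratum of the confounder the risk ratio is 1.0, i.e. no effect at all:

# Crude vs. stratum-specific relative risk on invented data.
# Each tuple: (cases_exposed, total_exposed, cases_unexposed, total_unexposed)
def relative_risk(cases_e, n_e, cases_u, n_u):
    return (cases_e / n_e) / (cases_u / n_u)

strata = {
    "high-risk stratum": (50, 100, 10, 20),  # risk 0.5 in both groups -> RR = 1.0
    "low-risk stratum": (2, 20, 10, 100),    # risk 0.1 in both groups -> RR = 1.0
}

# Collapsing the strata into a single crude 2x2 table hides the confounder
crude = [sum(values) for values in zip(*strata.values())]
print("crude RR:", round(relative_risk(*crude), 2))       # 2.6 -- a spurious association
for name, table in strata.items():
    print(name, "RR:", round(relative_risk(*table), 2))   # 1.0 within each stratum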

The presence of an objective connection between a factor and an outcome is just our judgment (conclusion), not a fact, and, like any judgment, it must be supported by third-party evidence and arguments beyond the scope of our research. There are no comprehensive, guaranteed tests to “assess” or “verify” whether there is a natural cause-and-effect relationship between our factor and the outcome. We have this luxury only at the stage of checking the statistical association between the characteristics under study. The conclusion about the existence in reality of a connection between a factor and an outcome should be based on all available information obtained not only during the study, but also from other scientific and practical sources.

For this purpose, a set of positive criteria was developed at the Harvard University School of Medicine that makes it possible to systematize all the non-statistical arguments in favor of the objective existence of a connection (effect) between the influencing factor and the outcome:

  • strength of the connection (the numerical value of the obtained statistical coefficient): a strong connection minimizes the possibility of interference from unaccounted-for and unknown third-party factors (confounders);
  • completeness of evidence and agreement with other studies: taking into account all possible nuances in conducting the study and interpreting its results, as well as the consistency of its conclusions with other studies conducted under different conditions and on different populations, allows us to assert that the study's conclusions reflect the real picture;
  • biological plausibility (credibility): if the study's findings can be explained by a reasonable biological mechanism, this strengthens the evidence base for the conclusions;
  • presence of a dose-dependent effect: if the severity of the outcome changes with a change in the dose of exposure, this also indicates an objective connection between them.

The final step in interpreting study results is to determine the limits in the population to which we can extend our conclusions [5].

Strictly speaking, our conclusions are demonstrably valid only for the population that we designated as the subject of the study and described through the inclusion and exclusion criteria in the "Materials and Methods" section. However, the professional community cannot afford to conduct dozens of identical studies on different populations under different conditions in order to obtain guaranteed conclusions. Determining external validity, that is, deciding to which patients and within what acceptable limits the conclusions and recommendations of the study can be extended, is entirely within the competence of the author of the work. Only a specialist and professional can carry out such an assessment successfully, relying on knowledge and practical experience.

All the described stages and recommendations are relevant not only for conducting your own research, they are also useful for critical evaluation and comprehensive analysis of new information obtained from reading scientific articles and monographs.

Contact information: Tikhova Galina Petrovna

References

1. Tikhova G. P. Methodology for planning a clinical trial. Question No. 1: How to determine the required sample size? Regional Anesthesia and Acute Pain Treatment. 2014; 8(3): 57–63.
2. Tikhova G. P. The meaning and interpretation of the error of the mean in clinical research and experiment. Regional Anesthesia and Acute Pain Treatment. 2013; 7(3): 50–3.
3. Lagakos S. W. The Challenge of Subgroup Analyses - Reporting without Distorting. N Engl J Med. 2006; 354: 1667–1669.
4. Tikhova G. P. Calculation and interpretation of relative risk and other statistical parameters obtained from a four-field frequency table. Regional Anesthesia and Acute Pain Treatment. 2012; 6(3): 69–75.
5. Rothwell P. M. External validity of randomised controlled trials: "to whom do the results of this trial apply?" Lancet. 2005; 365(9453): 82–93.
6. Pannucci C. J., Wilkins E. G. Identifying and Avoiding Bias in Research. Plast Reconstr Surg. 2010; 126(2): 619–25.

Published in the journal Regional Anesthesia and Acute Pain Management, 2015; Vol. IX, No. 3: 62–9.
Keywords: confounding factor, clinical trial, bias error, statistical significance of a hypothesis
Author: Tikhova G. P.
Institution: Karelian Scientific Center of the Russian Academy of Sciences (Petrozavodsk)

How to design a table

Before the table itself, give its title together with its serial number; the number is aligned to the right. Among the wishes and requirements:

  • compactness of tables (prefer several small ones to one large one);
  • keep rows, columns and their headings short and succinct, avoiding abbreviations;
  • One cell of the table contains one number; there should be no empty cells.

If the table contains notes, set them in a font one or two points smaller.

Definition of the word “Interpretation” according to TSB:

Interpretation (Latin interpretatio) - interpretation, explanation, clarification.

1) In the literal sense the term is used in jurisprudence (for example, the interpretation of a law by a lawyer or a judge is a "translation" of the "special" expressions in which an article of a code is formulated into "ordinary" language, as well as recommendations for its application), in art (the interpretation of a role by an actor or of a musical work by a pianist is the performer's individual reading of the work, which, generally speaking, is not uniquely determined by the author's intention) and in other areas of human activity.

2) Interpretation in mathematics, logic, the methodology of science and the theory of knowledge is a set of meanings (senses) attached in one way or another to the elements (expressions, formulas, symbols, etc.) of some natural-science or abstract-deductive theory (in those cases when the elements of the theory themselves are subjected to such "interpretation", one also speaks of the interpretation of symbols, formulas, etc.). The concept of interpretation has great epistemological significance: it plays an important role in comparing scientific theories with the areas they describe, in describing different ways of constructing a theory, and in characterizing changes in the relationship between them in the course of the development of knowledge. Since every natural-science theory is conceived and constructed to describe a certain area of reality, this reality serves as its "natural" interpretation. But such an "implicit" interpretation is not the only one possible even for the meaningful theories of classical physics and mathematics. Thus, from the fact that mechanical and electrical oscillatory systems are isomorphic, being described by the same differential equations, it immediately follows that at least two different interpretations are possible for such equations. This applies to an even greater extent to abstract-deductive logical-mathematical theories, which admit not only different but even non-isomorphic interpretations; it is generally difficult to speak of their "natural" interpretation at all. Abstract-deductive theories can do without "translating" their concepts into "physical language". For example, regardless of any physical interpretation, the concepts of Lobachevsky geometry can be interpreted in terms of Euclidean geometry (see Lobachevsky geometry). The discovery of the possibility of mutual interpretability of various deductive theories played a huge role both in the development of the deductive sciences themselves (especially as a tool for proving their relative consistency) and in the formation of the modern epistemological concepts associated with them. See Axiomatic method, Logic, Logical semantics, Model.

Lit.: Hilbert D., Foundations of Geometry, trans. from German, M.-L., 1948, ch. 2, § 9; Kleene S. C., Introduction to Metamathematics, trans. from English, M., 1957, ch. 3, § 15; Church A., Introduction to Mathematical Logic, vol. 1, trans. from English, M., 1960, Introduction, § 07; Fraenkel A., Bar-Hillel Y., Foundations of Set Theory, trans. from English, M., 1966, ch. 5, § 3. Yu. A. Gastev.

Interpretation of programming languages is one of the methods of implementing programming languages on electronic computers. Under interpretation, each elementary action in the language corresponds, as a rule, to its own program that implements this action, and the entire process of solving a problem is a computer simulation of the corresponding algorithm written in that language. With interpretation, the speed of solving problems is usually much lower than with other methods, but interpretation is easier to implement on a computer, and in many cases (for example, when simulating the operation of one computer on another) it turns out to be the only suitable method.

How to design graphic materials

We list the details that are important when adding illustrations to the text of a work:

  • a caption with the serial number, placed below the illustration;
  • center alignment of both the caption and the image;
  • mandatory presence of a legend (explanation of the symbols used);
  • placement of the titles of charts, graphs and other images under the material itself;
  • consistent labeling of the axes and indication of the units of measurement;
  • placement of questionnaires, survey forms and subjects' drawings in an appendix because of their large volume.

Never underestimate the importance of analyzing and reporting research results. Without this, all work immediately loses its meaning and is deprived of its logical conclusion.


What is it?

Translated from Latin, the word interpretatio means explanation or interpretation.

This definition of the term is given by an explanatory philosophical dictionary. In the humanities the word is used in a meaning close to "understanding".

Synonyms for interpretation:

  1. interpretation;
  2. commentary;
  3. clarification of meaning;
  4. decipherment.

Perceiving information from the environment, each person analyzes it in his own way. Of course, there are ideas and concepts that are common to everyone, but since all people have individual thinking, the same phenomena are interpreted differently.

Often this process occurs unconsciously (at the level of sensations, moral norms, rules of behavior instilled in childhood, and worldview). When a person uses his knowledge to decipher particular data, the interpretation becomes purposeful (for example, translating texts from foreign languages or from complex scientific language into one's native or more easily understood language).

You can interpret anything: information, events, dreams, laws, musical and literary works, films and even analyses.

Exact sciences

In mathematics and the other exact sciences, some interpretation is always implied. Any mathematical theory rests on things that require no explanation or proof from the outset. The simplest example of such a logical structure is Euclidean geometry, which derives its entire body of theorems from a few axioms. Each subsequent theorem builds on the previous ones. Such a ladder clearly shows the interpretation of theoretical constructs that is characteristic of modern science as a whole. The simplicity of the discoveries of the late Renaissance is a thing of the past: since the 19th century, any mathematical discovery has begun with some assumption that does not require proof. This is how the geometries of Lobachevsky and Riemann arose. Today interpretation is the operating principle of applied mathematics, which, acting from specified premises, is capable of solving problems of a very high order.

What is interpreted in psychology

This concept also has its meaning in psychology.

If you give a person the opportunity to explain the meaning of his experiences, the problem can be interpreted from the standpoint of both the conscious and the unconscious. Sometimes people become hostages of their own stereotypes, because of which they cannot see the forest for the trees.

Many psychologists work with drawings and tests. Based on the results obtained, conclusions are drawn about the patient’s mental health state.

Attention is paid to details, which are drawn differently for each client.

Dream interpretation is a rather complex method that belongs to the field of social psychoanalysis. To interpret the meaning of a dream correctly, the help of a specialist is necessary. Its decoding allows us to expand our understanding of the sphere of unconscious perception of the surrounding world.

Results

Let's return to our interpreters. The word "interpretable" is also used in modern colloquial speech; it is understood as meaning "becoming clear to the understanding", and it is in this sense that the word is used in everyday communication. The profession of "interpreter" has even appeared: this is an engineer who analyzes the entire array of data needed to manage mining operations. Such varied use of a well-known word may lead to the emergence of other meanings of the word "interpreter". How far the new meanings will stray from the original ones, the future will show.

Formula No. 10.9
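One commonly used definition of the sample skewness coefficient, consistent with the interpretation of β given below (offered here as an assumption, not as the textbook's exact formula), is:

\beta = \frac{1}{n\,s^{3}} \sum_{i=1}^{n} (x_i - \bar{x})^{3}

where x̄ is the sample mean, s is the sample standard deviation and n is the sample size.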

If β is equal to zero, this means that the original sample (its histogram) is symmetric: β=0

If β is greater than zero, then the sample is said to have positive, or right, skewness, that is, a wider range of values lies to the right of the sample mode: β>0

If β is less than zero, then the sample is said to have negative, or left, skewness, that is, a wider range of values lies to the left of the sample mode: β<0

Kurtosis (excess)

One of the measures of variability is kurtosis, which characterizes the degree of peakedness (sharpness) of the distribution of the sample elements, that is, of its histogram. Kurtosis is usually denoted by γ (gamma) and is calculated using the following formula:

Formula No. 10.10
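One commonly used definition of the sample excess kurtosis, consistent with the property γ = 0 for the normal distribution stated below (again offered as an assumption, not as the textbook's exact formula), is:

\gamma = \frac{1}{n\,s^{4}} \sum_{i=1}^{n} (x_i - \bar{x})^{4} - 3

where x̄ is the sample mean, s is the sample standard deviation and n is the sample size; subtracting 3 makes γ equal to zero for a normal distribution.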

If gamma is greater than zero, the original data are said to follow a peaked (sharp-topped) distribution: γ>0.

If gamma is less than zero, the original data are said to follow a flat-topped distribution: γ<0.

If gamma is equal to zero, the original data are said to follow a distribution of medium peakedness (the normal distribution has this property): γ=0.
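A short sketch of how β and γ can be computed in practice with Python's scipy (the data are illustrative); scipy's kurtosis uses the same convention as above and returns 0 for a normal distribution:

# Sample skewness and excess kurtosis of an illustrative data set.
from scipy import stats

data = [2.1, 2.4, 2.5, 2.7, 3.0, 3.1, 3.3, 3.9, 4.6, 6.8]

beta = stats.skew(data)                    # > 0 means right (positive) skewness
gamma = stats.kurtosis(data, fisher=True)  # excess kurtosis: 0 for a normal distribution

print(f"skewness beta = {beta:.3f}, kurtosis gamma = {gamma:.3f}")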

