dataset
stringclasses 24
values | id
stringlengths 17
53
| messages
listlengths 2
2
|
|---|---|---|
science.data_reco_mcq_sc
|
science.data_reco_mcq_sc.10715
|
[
{
"content": "Given a short query with keyphrases about a research idea in 2014 in the domain of artificial intelligence, recommend the relevant datasets to validate the research idea from options provided. A short description of the datasets is also provided to help you.\n\nQuery: I want to improve a state-of-the-art depth ordering algorithm for single\n\nKeyphrases: depth ordering single images\n\nDataset description:\nRFW: To validate the racial bias of four commercial APIs and four state-of-the-art (SOTA) algorithms.\nBSDS500: Berkeley Segmentation Data Set 500 (BSDS500) is a standard benchmark for contour detection. This dataset is designed for evaluating natural edge detection that includes not only object contours but also object interior boundaries and background boundaries.\nDialogue State Tracking Challenge: The Dialog State Tracking Challenges 2 & 3 (DSTC2&3) were research challenge focused on improving the state of the art in tracking the state of spoken dialog systems. State tracking, sometimes called belief tracking, refers to accurately estimating the user's goal as a dialog progresses.\nHollywood 3D dataset: A dataset for benchmarking action recognition algorithms in natural environments, while making use of 3D information. The dataset contains around 650 video clips, across 14 classes.\n\nOptions:- A: RFW B: BSDS500 C: Dialogue State Tracking Challenge D: Hollywood 3D dataset",
"role": "user"
},
{
"content": "BSDS500",
"role": "assistant"
}
] |
science.ddi_ner
|
science.ddi_ner.13
|
[
{
"content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Brand\", \"UnapprovedDrug\", \"Drug\", \"DrugGroup\"].\n\nPlease return the output as a JSON object of the format: {\"Brand\" : [\"Aspirin\", ...], \"DrugGroup\": [\"Salicyclate\", ...], \"Drug\": [\"Phenobarbital\", ...], \"UnapprovedDrug\": [\"Heroin\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nBefore using this medication, tell your doctor or pharmacist of all prescription and nonprescription products you may use, especially of: aminoglycosides (e.g., gentamicin, amikacin), amphotericin B, cyclosporine, non-steroidal anti-inflammatory drugs (e.g., ibuprofen), tacrolimus, vancomycin. Do not start or stop any medicine without doctor or pharmacist approval.",
"role": "user"
},
{
"content": "{\"Brand\": [], \"Drug\": [\"cyclosporine\", \"gentamicin\", \"vancomycin\", \"amikacin\", \"ibuprofen\", \"amphotericin B\", \"tacrolimus\"], \"DrugGroup\": [\"non-steroidal anti-inflammatory\", \"aminoglycosides\"], \"UnapprovedDrug\": []}",
"role": "assistant"
}
] |
science.bioasq_factoid_qa
|
science.bioasq_factoid_qa.921
|
[
{
"content": "You will be shown biomedical passages and your task is to the answer the provided question. Provide a concise and accurate response and do not include any additional text in the response other than the answer.\n\nQuestion: Name a CFL2 mutation which is associated with nemaline myopathy?\n\nContext: Because a mutation in CFL2 was identified in a family with nemaline myopathy, we performed sequence analysis of the gene and a novel homozygous missense mutation in exon 2 (c.19G>A, p.Val7Met) of CFL2 was identified in both siblings.",
"role": "user"
},
{
"content": "c.19G>A, or p.Val7Met",
"role": "assistant"
}
] |
science.medmentions_ner
|
science.medmentions_ner.1817
|
[
{
"content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"HealthCareActivity\", \"InjuryOrPoisoning\", \"BodySubstance\", \"IntellectualProduct\", \"AnatomicalStructure\", \"SpatialConcept\", \"Chemical\", \"Bacterium\", \"MedicalDevice\", \"Organization\", \"BiomedicalOccupationOrDiscipline\", \"Finding\", \"BiologicFunction\", \"Virus\", \"ResearchActivity\", \"ClinicalAttribute\", \"PopulationGroup\", \"Eukaryote\", \"BodySystem\", \"Food\", \"ProfessionalOrOccupationalGroup\"].\n\nPlease return the output as a JSON object of the format: {\"Virus\": [\"HIV\", ...], \"Bacterium\": [\"MRSA\", ...], \"AnatomicalStructure\": [\"Lung\", ...], \"BodySystem\": [\"CNS\", ...], \"BodySubstance\": [\"Serum\", ...], \"Finding\": [\"Headache\", ...], \"InjuryOrPoisoning\": [\"Fracture\", ...], \"BiologicFunction\": [\"Death\", ...], \"HealthCareActivity\": [\"Biopsy\", ...], \"ResearchActivity\": [\"Clinical trial\", ...], \"MedicalDevice\": [\"Lenses\", ...], \"SpatialConcept\": [\"Camps\", ...], \"BiomedicalOccupationOrDiscipline\": [\"Forensic medicine\", ...], \"Organization\": [\"WHO\", ...], \"ProfessionalOrOccupationalGroup\": [\"Provider\", ...], \"PopulationGroup\": [\"Swimmers\", ...], \"Chemical\": [\"Gold\", ...], \"Food\": [\"Rice\", ...], \"IntellectualProduct\": [\"RPAM\", ...], \"ClinicalAttribute\": [\"Biomarker\", ...], \"Eukaryote\": [\"Dogs\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nBlubber transcriptome response to acute stress axis activation involves transient changes in adipogenesis and lipolysis in a fasting -adapted marine mammal Stress can compromise an animal's ability to conserve metabolic stores and participate in energy -demanding activities that are critical for fitness. Understanding how wild animals, especially those already experiencing physiological extremes (e.g. fasting), regulate stress responses is critical for evaluating the impacts of anthropogenic disturbance on physiology and fitness, key challenges for conservation. However, studies of stress in wildlife are often limited to baseline endocrine measurements and few have investigated stress effects in fasting -adapted species. We examined downstream molecular consequences of hypothalamic-pituitary-adrenal (HPA) axis activation by exogenous adrenocorticotropic hormone (ACTH) in blubber of northern elephant seals due to the ease of blubber sampling and its key role in metabolic regulation in marine mammals. We report the first phocid blubber transcriptome produced by RNAseq, containing over 140,000 annotated transcripts, including metabolic and adipocytokine genes of interest. The acute response of blubber to stress axis activation, measured 2 hours after ACTH administration, involved highly specific, transient (lasting <24 hours) induction of gene networks that promote lipolysis and adipogenesis in mammalian adipocytes. Differentially expressed genes included key adipogenesis factors which can be used as blubber -specific markers of acute stress in marine mammals of concern for which sampling of other tissues is not possible.",
"role": "user"
},
{
"content": "{\"AnatomicalStructure\": [\"adipocytes\", \"tissues\", \"metabolic\", \"genes\", \"Blubber\", \"adipocytokine genes\", \"blubber\"], \"Bacterium\": [], \"BiologicFunction\": [\"activation\", \"regulate\", \"lipolysis\", \"expressed\", \"physiology\", \"stress axis activation\", \"adipogenesis\", \"gene networks\", \"acute stress axis activation\", \"stress responses\"], \"BiomedicalOccupationOrDiscipline\": [], \"BodySubstance\": [], \"BodySystem\": [\"hypothalamic-pituitary-adrenal (HPA) axis\"], \"Chemical\": [\"ACTH\", \"adrenocorticotropic hormone\", \"transcripts\"], \"ClinicalAttribute\": [\"markers\"], \"Eukaryote\": [\"animal\\u0027s\", \"wildlife\", \"marine mammals\", \"mammalian\", \"wild animals\", \"marine mammal\", \"phocid\", \"elephant seals\"], \"Finding\": [\"stress\", \"fasting\", \"physiological extremes\", \"Stress\"], \"Food\": [], \"HealthCareActivity\": [\"RNAseq\", \"administration\", \"downstream molecular consequences\", \"baseline endocrine measurements\"], \"InjuryOrPoisoning\": [], \"IntellectualProduct\": [\"species\"], \"MedicalDevice\": [], \"Organization\": [], \"PopulationGroup\": [], \"ProfessionalOrOccupationalGroup\": [], \"ResearchActivity\": [], \"SpatialConcept\": [\"transcriptome\", \"northern\"], \"Virus\": []}",
"role": "assistant"
}
] |
science.scitldr_aic
|
science.scitldr_aic.338
|
[
{
"content": "Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.\n\nPaper text:\n\nOver the past four years, neural networks have been proven vulnerable to adversarial images: targeted but imperceptible image perturbations lead to drastically different predictions.\nWe show that adversarial vulnerability increases with the gradients of the training objective when viewed as a function of the inputs.\nFor most current network architectures, we prove that the L1-norm of these gradients grows as the square root of the input size.\nThese nets therefore become increasingly vulnerable with growing image size.\nOur proofs rely on the network’s weight distribution at initialization, but extensive experiments confirm that our conclusions still hold after usual training.\nFollowing the work of BID7 , Convolutional Neural Networks (CNNs) have been found vulnerable to adversarial examples: an adversary can drive the performance of state-of-the art CNNs down to chance level with imperceptible changes of the inputs.\nA number of studies have tried to address this issue, but only few have stressed that, because adversarial examples are essentially small input changes that create large output variations, they are inherently caused by large gradients of the neural network with respect to its inputs.\nOf course, this view, which we will focus on here, assumes that the network and loss are differentiable.\nIt has the advantage to yield a large body of specific mathematical tools, but might not be easily extendable to masked gradients, non-smooth models or the 0-1-loss.\nNevertheless, our conclusions might even hold for non-smooth models, given that the latter can often be viewed as smooth at a coarser level.Contributions.\nMore specifically, we provide theoretical and empirical arguments supporting the existence of a monotonic relationship between the gradient norm of the training objective (of a differentiable classifier) and its adversarial vulnerability.\nEvaluating this norm based on the weight statistics at initialization, we show that CNNs and most feed-forward networks, by design, exhibit increasingly large gradients with input dimension d, almost independently of their architecture.\nThat leaves them increasingly vulnerable to adversarial noise.\nWe corroborate our theoretical results by extensive experiments.\nAlthough some of those experiments involve adversarial regularization schemes, our goal is not to advocate a new adversarial defense (these schemes are already known), but to show how their effect can be explained by our first order analysis.\nWe do not claim to explain all aspects of adversarial vulnerability, but we claim that our first order argument suffices to explain a significant part of the empirical findings on adversarial vulnerability.\nThis calls for researching the design of neural network architectures with inherently smaller gradients and provides useful guidelines to practitioners and network designers.\nFor differentiable classifiers and losses, we showed that adversarial vulnerability increases with the gradients ∂ x L of the loss, which is confirmed by the near-perfect functional relationship between gradient norms and vulnerability FIG1 We then evaluated the size of ∂ x L q and showed that, at initialization, usual feed-forward nets (convolutional or fully connected) are increasingly vulnerable to p -attacks with growing input dimension d (the image-size), almost independently of their architecture.\nOur experiments show that, on the tested architectures, usual training escapes those prior gradient (and vulnerability) properties on the training, but not on the test set.\nBID14 suggest that alleviating this generalization gap requires more data.\nBut a natural (complementary) alternative would be to search for architectures with naturally smaller gradients, and in particular, with well-behaved priors.\nDespite all their limitations (being only first-order, assuming a prior weight-distribution and a differentiable loss and architecture), our theoretical insights may thereby still prove to be precious future allies.",
"role": "user"
},
{
"content": "Neural nets have large gradients by design; that makes them adversarially vulnerable.",
"role": "assistant"
}
] |
science.qasa_abstractive_qa
|
science.qasa_abstractive_qa.1009
|
[
{
"content": "You will be shown an excerpt from a computer science scientific research paper, followed by a question about the paper. Please answer the question. Do not include any text in your response other than the answer.\n\nContext: One of the main benefits of transfer learning is being able to train a model for a task with a small number of labels. We evaluate ULMFiT on different numbers of labeled examples in two settings: only labeled examples are used for LM fine-tuning (‘supervised’); and all task data is available and can be used to fine-tune the LM (‘semi-supervised’). We compare ULMFiT to training from scratch—which is necessary for hypercolumn-based approaches. We split off balanced fractions of the training data, keep the validation set fixed, and use the same hyperparameters as before. We show the results in Figure 3.\n\nQuestion: Is weak supervision a subset or type of regular supervised learning?",
"role": "user"
},
{
"content": "In this work, fine-tuning an LM only on labeled examples is considered to be supervised learning. The general definitions of supervised learning and weak supervision, and whether weak supervision is a subset of regular supervised learning cannot be answered in this paper.",
"role": "assistant"
}
] |
science.linnaeus_ner
|
science.linnaeus_ner.34
|
[
{
"content": "You will be shown the full-text from a biomedical research paper. Given this text, your task is to extract all unique entities of the following types: [\"Species\"].\n\nPlease return the output as a JSON object of the format: {\"Species\": [\"rat\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nDocument text:\n\nPiezoelectric osteotomy in hand surgery: first experiences with a new technique\nAbstract\nBackground\nIn hand and spinal surgery nerve lesions are feared complications with the use of standard oscillating saws. Oral surgeons have started using a newly developed ultrasound bone scalpel when performing precise osteotomies. By using a frequency of 25–29 kHz only mineralized tissue is cut, sparing the soft tissue. This reduces the risk of nerve lesions. As there is a lack of experience with this technique in the field of orthopaedic bone surgery, we performed the first ultrasound osteotomy in hand surgery.\n\nMethod\nWhile performing a correctional osteotomy of the 5th metacarpal bone we used the Piezosurgery® Device from Mectron [Italy] instead of the usual oscillating saw. We will report on our experience with one case, with a follow up time of one year.\n\nResults\nThe cut was highly precise and there were no vibrations of the bone. The time needed for the operation was slightly longer than the time needed while using the usual saw. Bone healing was good and at no point were there any neurovascular disturbances.\n\nConclusion\nThe Piezosurgery® Device is useful for small long bone osteotomies. Using the fine tip enables curved cutting and provides an opportunity for new osteotomy techniques. As the device selectively cuts bone we feel that this device has great potential in the field of hand- and spinal surgery.\n\n\n\nBackground\nFor osteotomies of the hand oscillating saws are usually used [1]. Even though they are varied in size, they are not very precise for use in the vicinity of nerves and arteries. They also pose problems while being used in conjunction with magnification, as one's range of sight and focus is restricted when wearing magnifying glasses. For that reason oral surgeons have moved to using the newly developed piezoelectrical bone scalpel when operating in the near vicinity of nerves or arteries. The tip of this instrument oscillates in the frequency of ultrasound [2]. The mechanism of this device is based on the so called Piezo – Effect. French Physicists Jean and Marie Curie first mentioned the direct Piezo-Effect 1880, whereby certain crystals produce electrical current while under mechanical pressure. The reciprocal effect, by which the crystals are deformed when under electrical current, was then discovered a while later. This is the effect being used by the Piezosurgery Device®. In this device, the electrical field is located in the handle of the saw [3]. Due to the deformation caused by the electrical current, a cutting – hammering movement is produced at the tip of the instrument. These micro movements are in the frequency range of 25 to 29 kHz and, depending on the insert, with an amplitude of 60 to 210 μm. This way only mineralized tissue is selectively cut. Neurovascular tissue and other soft tissue would only be cut by a frequency of above 50 kHz [3-5]. Depending on the strength of the bone and the blade geometry, the efficiency of the cutting can be regulated by the frequency modulator and the power level. For cooling there is an integrated pump with five different working levels. This pump automatically washes physiological solution to the area being cut. The cost of the device is about 7.000 USD. Additional costs per operation are for the cooling liquid and are in the range of a few dollars. We have used the Piezosurgery Device® by Mectron [Italy] [3] for the first time in osteotomies of the long bone in the field of hand surgery. We will report on our experience with one case, with a follow up time of one year.\n\nMethod\nThe correctional osteotomy was performed on a 23 year old worker who suffered a malunited metacarpal bone fracture of the fifth finger on his dominant hand. The X-ray revealed a 45 degree angular deformity of the fifth metacarpal neck with internal rotation. (Figure 1). The operation was performed under regional anesthesia. A longitudinal incision was made over the fifth metacarpal. The tendon of the extensor digiti minimi was found and on its radial side the periosteum of metacarpal five was reached. The periosteum was opened longitudinally over the defect as usual. For the correction of the defect of 45 degrees, a bone wedge was excised. Instead of using the traditional oscillating saw, the Piezosurgery Device® [3] was used (Figure 2). We used a sharp hardened saw coated with titannitrid (Figure 3). For most of the surgery the highest power level, the boosted burst c, was used. We set the automatic cooling of the area with water to its highest level. The angulation and rotation was corrected and fixed with a 1.5 mm titanium five-hole plate and four screws. Closure of the wound was done in layers. Mobilization was started on the 10th postoperative day. The overall time of observation was one year.\n\nResults and discussion\nThe Piezosurgery® Device is ideally sized for hand surgery. The cutting was very precise. The edges of the osteotomy were all sharp to the edge, there was no need to split the bone with a chisel, nor was there the danger of a break out. During the osteotomy there were no disturbing vibrations in the area of operation. This absence of vibration is very practical for operations using a magnifier. Vercellotti mentions that to overcome any problems during surgery, instead of increasing pressure on the hand piece, as in traditional techniques, it is necessary to find the correct pressure to achieve the desired result. With piezoelectric surgery, increasing the working pressure above a certain limit impedes the vibrations of the insert [4]. We have also experienced this in our study. The instrument can be moved in all directions comparable to a pen. The tip of the instrument is exchangeable. Using the fine tip enables multiplanar as well as curved cutting. Because of the automatic water cooling during the whole procedure, there is always a clear view onto the object. This is something oral surgeons found especially useful [6]. The authors mention that the downside of the device is the relative slow sawing process. We needed about 30 seconds for one cut of the relatively small bone. This is about 20 seconds longer than the time needed for cutting with the usual saw. Although the power can be regulated with the power box and the use of different scalpels, we agree with other authors that the optimal use of this device is in surgeries of small bones where precise and soft tissue friendly cutting is required [7]. As other literature has shown, the device selectively cuts bone while sparing nerves and other soft tissue [2,3]. This allows for minimal invasive surgeries with limited retraction of soft tissue and minimal stripping of the periosteum, saves time and might have a positive effect on the healing process. Our aim of the first time use of the Piezosurgery® Device in hand surgery was to check its usability in osteotomies of tubular bones. The preparation of the bone was done in the usual manner as is done when cutting with an oscillating saw. The reason for this was to fully visualize the cutting process using this new device, although in the future, it should be possible to minimize the bony exposure. In our patient the postoperative healing of the wound and the bone consolidation (Figure 4) were smooth. The duration of postoperative sick leave was four weeks which is more rapid than the usual recovery period. The patient regained full use of his finger according to the state before the fracture. At no point was there any loss of sensitivity. The patient as well as the surgeons were fully satisfied with the result.\n\nConclusion\nThe Piezosurgery® Device is a useful device for small long bone osteotomies. We feel that this device has great potential in the field of hand- and spinal surgery. As the device selectively cuts bone, considerable nerve lesions can be avoided and minimal invasive surgeries are possible. Using the fine tip enables curved cutting and provides an opportunity for new osteotomy techniques.\n\nCompeting interests\nThe author(s) declare that they have no competing interests.\n\nAuthors' contributions\nDJH initiated and coordinated the new application of Piezosurgery® device and wrote the publication.\nStSt lead the osteotomy as he was experienced with this tool from oral surgery. He played a major part in writing the technical aspects.\nOVK was the treating surgeon, performed the operation and evaluated the new tool.\nSS performed a literature review and wrote part of the publication.\nPH was the treating chief surgeon, evaluated the new tool and lead the treatment in all aspects.\n\nConsent\nWe obtained oral consent from the patient but could not obtain written consent.\n\nPre-publication history\nThe pre-publication history for this paper can be accessed here:",
"role": "user"
},
{
"content": "{\"Species\": [\"patient\"]}",
"role": "assistant"
}
] |
science.anat_em_ner
|
science.anat_em_ner.549
|
[
{
"content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"AnatomicalEntity\"].\n\nPlease return the output as a JSON object of the format: {\"AnatomicalEntity\": [\"tumor cells\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nEffects of doxycycline on in vitro growth, migration, and gelatinase activity of breast carcinoma cells.\nMetastatic disease is one of the major causes of death from cancer in human beings. Several enzyme systems have been implicated in the metastatic process, but the metalloproteinases (MPs) appear to be the major group involved in most instances of neoplastic invasion. Increased MP activity has been correlated with the metastatic potential of many cancers, including breast cancer. MPs also play a role in tumor angiogenesis. Tetracyclines are antimicrobial agents that can suppress MP activity in a variety of tissues, including gingiva, bone, and cartilage. Several reports have indicated that tetracyclines can suppress tumor MPs as well. A synthetic tetracycline, doxycycline, inhibits migration of human MDA-MB-435 breast adenocarcinoma cells through a reconstituted basement membrane (Matrigel), an assay used as an in vitro surrogate for the in vivo process of tumor invasion through basement membranes. Additionally, doxycycline diminishes the proliferation of this breast cancer cell line and also decreases its gelatinolytic activity, as determined by gel zymography.",
"role": "user"
},
{
"content": "{\"AnatomicalEntity\": [\"breast cancer cell line\", \"MDA-MB-435 breast adenocarcinoma cells\", \"cancer\", \"tissues\", \"cancers\", \"basement membranes\", \"tumor\", \"breast cancer\", \"cartilage\", \"gingiva\", \"neoplastic\", \"breast carcinoma cells\", \"bone\", \"basement membrane\"]}",
"role": "assistant"
}
] |
science.pubmedqa_qa
|
science.pubmedqa_qa.383
|
[
{
"content": "You will be shown an abstract from a research paper, and a corresponding research question. Determine the answer to the research question based on the context provided. Your answer should be one of: \"YES\", \"NO\", or \"MAYBE\".\n\nContext:\n(BACKGROUND) An increasingly significant public health issue in Canada, and elsewhere throughout the developed world, pertains to the provision of adequate palliative/end-of-life (P/EOL) care. Informal caregivers who take on the responsibility of providing P/EOL care often experience negative physical, mental, emotional, social and economic consequences. In this article, we specifically examine how Canada's Compassionate Care Benefit (CCB)--a contributory benefits social program aimed at informal P/EOL caregivers--operates as a public health response in sustaining informal caregivers providing P/EOL care, and whether or not it adequately addresses known aspects of caregiver burden that are addressed within the population health promotion (PHP) model.\n(METHODS) As part of a national evaluation of Canada's Compassionate Care Benefit, 57 telephone interviews were conducted with Canadian informal P/EOL caregivers in 5 different provinces, pertaining to the strengths and weaknesses of the CCB and the general caregiving experience. Interview data was coded with Nvivo software and emerging themes were identified by the research team, with such findings published elsewhere. The purpose of the present analysis was identified after comparing the findings to the literature specific to caregiver burden and public health, after which data was analyzed using the PHP model as a guiding framework.\n(RESULTS) Informal caregivers spoke to several of the determinants of health outlined in the PHP model that are implicated in their burden experience: gender, income and social status, working conditions, health and social services, social support network, and personal health practises and coping strategies. They recognized the need for improving the CCB to better address these determinants.\n\nQuestion:\nCanada's Compassionate Care Benefit: is it an adequate public health response to addressing the issue of caregiver burden in end-of-life care?\n\nAnswer:",
"role": "user"
},
{
"content": "NO",
"role": "assistant"
}
] |
science.bioasq_general_qa
|
science.bioasq_general_qa.1048
|
[
{
"content": "You will be shown a question, followed by excerpts from biomedical research papers. Please answer the question based on the provided context. Do not include any text in your response other than the answer.\n\nQuestion: What is the function of a DEAD box protein?\n\nContext: DEAD-box helicase proteins perform ATP-dependent rearrangements of structured RNAs throughout RNA biology.\nDEAD-box proteins are ubiquitous in RNA-mediated processes and function by coupling cycles of ATP binding and hydrolysis to changes in affinity for single-stranded RNA. Many DEAD-box proteins use this basic mechanism as the foundation for a version of RNA helicase activity\nDEAD-box proteins as RNA helicases and chaperones.\nRNA helicases of the DEAD-box and related families have been found to be required for all processes involving RNA molecules\nDDX3 belongs to the DEAD-box proteins, a large family of ATP-dependent RNA helicases that participate in all aspects of RNA metabolism\nDEAD-box proteins catalyze the ATP-dependent unwinding of RNA duplexes and accompany RNA molecules throughout their cellular life.\nDEAD-box (DDX) proteins belong to the largest subfamily of RNA helicase SF2, which contributes to all biological processes of RNA metabolism\nDDX3Y (also known as DBY) is a member of the DEAD box protein family, which is involved in ATP-dependent RNA unwinding, needed in a variety of cellular processes including splicing, ribosome biogenesis and RNA degradation.\nVASA is a member of the DEAD-box protein family that plays an indispensable role in mammalian spermatogenesis, particularly during meiosis.\nBACKGROUND: Vasa is a member of the DEAD-box protein family that plays an indispensable role in mammalian spermatogenesis, particularly during meiosis.\nThe yeast DEAD box protein Mss116p is a general RNA chaperone that functions in mitochondrial group I and II intron splicing, translational activation, and RNA end processing.\nDEAD box proteins are putative RNA helicases that function in all aspects of RNA metabolism, including translation, ribosome biogenesis, and pre-mRNA splicing. B\nrom unwinding RNA duplexes, which established these proteins as RNA helicases, DEAD box proteins have been shown to also catalyse RNA annealing and to displace proteins from RNA. DEAD bo\nDDX3Y (also known as DBY) is a member of the DEAD box protein family, which is involved in ATP-dependent RNA unwinding, needed in a variety of cellular processes including splicing, ribosome biogenesis and RNA degradation. In\n'DEAD/H' family of putative RNA helicases may help regulate these processes by utilizing intrinsic RNA-dependent ATPase activity to catalyze conformational changes in RNA secondary structure. To\nDEAD-box helicases catalyze the ATP-dependent unwinding of RNA duplexes.\nA DEAD-box protein functions as an ATP-dependent RNA chaperone in group I intron splicing.\nDEAD-box proteins are nonprocessive RNA helicases that play diverse roles in cellular processes.\nA DEAD-box protein alone promotes group II intron splicing and reverse splicing by acting as an RNA chaperone.\nDEAD/H-box proteins are involved in various aspects of RNA metabolism.\nOur results demonstrate that a DExH/D-box protein has a specific, physiologically relevant chaperone function in the folding of a natural RNA substrate.\nDEAD-box proteins are the largest family of nucleic acid helicases, and are crucial to RNA metabolism throughout all domains of life.\nHere, structural, biochemical and genetic analyses of the yeast DEAD-box protein Mss116p indicate that the helicase core domains have modular functions that enable a novel mechanism for RNA-duplex recognition and unwinding.\nThe DEAD-box RNA helicase family comprise enzymes that participate in every aspect of RNA metabolism, associated with a diverse range of cellular functions including response to abiotic stress.\nThe Thermus thermophilus DEAD-box protein Hera is a general RNA binding protein and plays a key role in tRNA metabolism.\nDEAD-box proteins, a large class of RNA-dependent ATPases, regulate all aspects of gene expression and RNA metabolism.\nDEAD box proteins are putative RNA helicases that have been implicated in cellular processes involving alteration of RNA secondary structure, such as translation initiation and splicing.\nDEAD box proteins catalyze the ATP-dependent unwinding of double-stranded RNA (dsRNA).\nDEAD-box proteins are ATP-dependent RNA helicases that function in various stages of RNA processing and in RNP remodeling.\nDEAD-box proteins are ubiquitous in RNA-mediated processes and function by coupling cycles of ATP binding and hydrolysis to changes in affinity for single-stranded RNA.\nDEAD-box proteins ordinarily function as constituents of enormous multi-protein complexes and it is believed that interactions with other components in the complexes might be answerable for the various capacities ascribed to these proteins.\nThere is emerging evidence that DEAD-box family of RNA helicases play pivotal functions in various cellular processes and in numerous cases have been embroiled in cellular proliferation and/or neoplastic transformation.\nThis activity, coupled with mechanisms to direct different DEAD-box proteins to their physiological substrates, allows them to promote RNA folding steps and rearrangements and to accelerate remodeling of RNA–protein complexes.\nMany DEAD-box proteins use this basic mechanism as the foundation for a version of RNA helicase activity, efficiently separating the strands of short RNA duplexes in a process that involves little or no translocation.",
"role": "user"
},
{
"content": "DEAD-box helicases are ubiquitous in RNA-mediated processes and function by coupling cycles of ATP binding and hydrolysis to changes in affinity for single-stranded RNA. DEAD-box helicase proteins perform ATP-dependent rearrangements of structured RNAs DEAD-box (DDX) proteins belong to the largest subfamily of RNA helicase SF2, which contributes to all biological processes of RNA metabolism DEAD-box proteins catalyze the ATP-dependent unwinding of RNA duplexes and accompany RNA molecules throughout their cellular life.",
"role": "assistant"
}
] |
science.qasper_extractive_qa
|
science.qasper_extractive_qa.248.0
|
[
{
"content": "You will be shown sections from a scientific research paper, together with a question about the paper. Your task is to answer the question based on the contents of the paper. Your response should be a `json` object with two fields:\n\n\"answer\": An array of strings extracted directly from the paper which, collectively, answer the question.\n\"evidence\": An array of strings. Each should be an excerpt from the paper, in which one or more of the extracted answers can be found.\n\nFor instance, for the question \"What baselines did the authors compare against?\", a sample response might be:\n\n{\n \"answer\": [\"BERT\", \"RoBERTa\"],\n \"evidence\": [\"In our experiments, we compare the performance of our model against BERT and RoBERTa.\"]\n}\n\nDo not include any text in your response other than the json.\n\nIf the question is unanswerable given the provided excerpts, respond with the single word \"null\".\n\nPaper: Recognizing Musical Entities in User-generated Content\n\nRecognizing Musical Entities is important for Music Information Retrieval (MIR) since it can improve the performance of several tasks such as music recommendation, genre classification or artist similarity. However, most entity recognition systems in the music domain have concentrated on formal texts (e.g. artists' biographies, encyclopedic articles, etc.), ignoring rich and noisy user-generated content. In this work, we present a novel method to recognize musical entities in Twitter content generated by users following a classical music radio channel. Our approach takes advantage of both formal radio schedule and users' tweets to improve entity recognition. We instantiate several machine learning algorithms to perform entity recognition combining task-specific and corpus-based features. We also show how to improve recognition results by jointly considering formal and user-generated content\n\nIntroduction\nThe increasing use of social media and microblogging services has broken new ground in the field of Information Extraction (IE) from user-generated content (UGC). Understanding the information contained in users' content has become one of the main goal for many applications, due to the uniqueness and the variety of this data BIBREF0 . However, the highly informal and noisy status of these sources makes it difficult to apply techniques proposed by the NLP community for dealing with formal and structured content BIBREF1 .\nIn this work, we analyze a set of tweets related to a specific classical music radio channel, BBC Radio 3, interested in detecting two types of musical named entities, Contributor and Musical Work.\nThe method proposed makes use of the information extracted from the radio schedule for creating links between users' tweets and tracks broadcasted. Thanks to this linking, we aim to detect when users refer to entities included into the schedule. Apart from that, we consider a series of linguistic features, partly taken from the NLP literature and partly specifically designed for this task, for building statistical models able to recognize the musical entities. To that aim, we perform several experiments with a supervised learning model, Support Vector Machine (SVM), and a recurrent neural network architecture, a bidirectional LSTM with a CRF layer (biLSTM-CRF).\nThe contributions in this work are summarized as follows:\nThe paper is structured as follows. In Section 2, we present a review of the previous works related to Named Entity Recognition, focusing on its application on UGC and MIR. Afterwards, in Section 3 it is presented the methodology of this work, describing the dataset and the method proposed. In Section 4, the results obtained are shown. Finally, in Section 5 conclusions are discussed.\n\nRelated Work\nNamed Entity Recognition (NER), or alternatively Named Entity Recognition and Classification (NERC), is the task of detecting entities in an input text and to assign them to a specific class. It starts to be defined in the early '80, and over the years several approaches have been proposed BIBREF2 . Early systems were based on handcrafted rule-based algorithms, while recently several contributions by Machine Learning scientists have helped in integrating probabilistic models into NER systems.\nIn particular, new developments in neural architectures have become an important resource for this task. Their main advantages are that they do not need language-specific knowledge resources BIBREF3 , and they are robust to the noisy and short nature of social media messages BIBREF4 . Indeed, according to a performance analysis of several Named Entity Recognition and Linking systems presented in BIBREF5 , it has been found that poor capitalization is one of the main issues when dealing with microblog content. Apart from that, typographic errors and the ubiquitous occurrence of out-of-vocabulary (OOV) words also cause drops in NER recall and precision, together with shortenings and slang, particularly pronounced in tweets.\nMusic Information Retrieval (MIR) is an interdisciplinary field which borrows tools of several disciplines, such as signal processing, musicology, machine learning, psychology and many others, for extracting knowledge from musical objects (be them audio, texts, etc.) BIBREF6 . In the last decade, several MIR tasks have benefited from NLP, such as sound and music recommendation BIBREF7 , automatic summary of song review BIBREF8 , artist similarity BIBREF9 and genre classification BIBREF10 .\nIn the field of IE, a first approach for detecting musical named entities from raw text, based on Hidden Markov Models, has been proposed in BIBREF11 . In BIBREF12 , the authors combine state-of-the-art Entity Linking (EL) systems to tackle the problem of detecting musical entities from raw texts. The method proposed relies on the argumentum ad populum intuition, so if two or more different EL systems perform the same prediction in linking a named entity mention, the more likely this prediction is to be correct. In detail, the off-the-shelf systems used are: DBpedia Spotlight BIBREF13 , TagMe BIBREF14 , Babelfy BIBREF15 . Moreover, a first Musical Entity Linking, MEL has been presented in BIBREF16 which combines different state-of-the-art NLP libraries and SimpleBrainz, an RDF knowledge base created from MusicBrainz after a simplification process.\nFurthermore, Twitter has also been at the center of many studies done by the MIR community. As example, for building a music recommender system BIBREF17 analyzes tweets containing keywords like nowplaying or listeningto. In BIBREF9 , a similar dataset it is used for discovering cultural listening patterns. Publicly available Twitter corpora built for MIR investigations have been created, among others the Million Musical Tweets dataset BIBREF18 and the #nowplaying dataset BIBREF19 .\n\nMethodology\nWe propose a hybrid method which recognizes musical entities in UGC using both contextual and linguistic information. We focus on detecting two types of entities: Contributor: person who is related to a musical work (composer, performer, conductor, etc). Musical Work: musical composition or recording (symphony, concerto, overture, etc).\nAs case study, we have chosen to analyze tweets extracted from the channel of a classical music radio, BBC Radio 3. The choice to focus on classical music has been mostly motivated by the particular discrepancy between the informal language used in the social platform and the formal nomenclature of contributors and musical works. Indeed, users when referring to a musician or to a classical piece in a tweet, rarely use the full name of the person or of the work, as shown in Table 2.\nWe extract information from the radio schedule for recreating the musical context to analyze user-generated tweets, detecting when they are referring to a specific work or contributor recently played. We manage to associate to every track broadcasted a list of entities, thanks to the tweets automatically posted by the BBC Radio3 Music Bot, where it is described the track actually on air in the radio. In Table 3, examples of bot-generated tweets are shown.\nAfterwards, we detect the entities on the user-generated content by means of two methods: on one side, we use the entities extracted from the radio schedule for generating candidates entities in the user-generated tweets, thanks to a matching algorithm based on time proximity and string similarity. On the other side, we create a statistical model capable of detecting entities directly from the UGC, aimed to model the informal language of the raw texts. In Figure 1, an overview of the system proposed is presented.\n\nNER system\nAccording to the literature reviewed, state-of-the-art NER systems proposed by the NLP community are not tailored to detect musical entities in user-generated content. Consequently, our first objective has been to understand how to adapt existing systems for achieving significant results in this task.\nIn the following sections, we describe separately the features, the word embeddings and the models considered. All the resources used are publicy available.\nWe define a set of features for characterizing the text at the token level. We mix standard linguistic features, such as Part-Of-Speech (POS) and chunk tag, together with several gazetteers specifically built for classical music, and a series of features representing tokens' left and right context. For extracting the POS and the chunk tag we use the Python library twitter_nlp, presented in BIBREF1 .\nIn total, we define 26 features for describing each token: 1)POS tag; 2)Chunk tag; 3)Position of the token within the text, normalized between 0 and 1; 4)If the token starts with a capital letter; 5)If the token is a digit. Gazetteers: 6)Contributor first names; 7)Contributor last names; 8)Contributor types (\"soprano\", \"violinist\", etc.); 9)Classical work types (\"symphony\", \"overture\", etc.); 10)Musical instruments; 11)Opus forms (\"op\", \"opus\"); 12)Work number forms (\"no\", \"number\"); 13)Work keys (\"C\", \"D\", \"E\", \"F\" , \"G\" , \"A\", \"B\", \"flat\", \"sharp\"); 14)Work Modes (\"major\", \"minor\", \"m\"). Finally, we complete the tokens' description including as token's features the surface form, the POS and the chunk tag of the previous and the following two tokens (12 features).\nWe consider two sets of GloVe word embeddings BIBREF20 for training the neural architecture, one pre-trained with 2B of tweets, publicy downloadable, one trained with a corpora of 300K tweets collected during the 2014-2017 BBC Proms Festivals and disjoint from the data used in our experiments.\nThe first model considered for this task has been the John Platt's sequential minimal optimization algorithm for training a support vector classifier BIBREF21 , implemented in WEKA BIBREF22 . Indeed, in BIBREF23 results shown that SVM outperforms other machine learning models, such as Decision Trees and Naive Bayes, obtaining the best accuracy when detecting named entities from the user-generated tweets.\nHowever, recent advances in Deep Learning techniques have shown that the NER task can benefit from the use of neural architectures, such as biLSTM-networks BIBREF3 , BIBREF4 . We use the implementation proposed in BIBREF24 for conducting three different experiments. In the first, we train the model using only the word embeddings as feature. In the second, together with the word embeddings we use the POS and chunk tag. In the third, all the features previously defined are included, in addition to the word embeddings. For every experiment, we use both the pre-trained embeddings and the ones that we created with our Twitter corpora. In section 4, results obtained from the several experiments are reported.\n\nResults\nThe performances of the NER experiments are reported separately for three different parts of the system proposed.\nTable 6 presents the comparison of the various methods while performing NER on the bot-generated corpora and the user-generated corpora. Results shown that, in the first case, in the training set the F1 score is always greater than 97%, with a maximum of 99.65%. With both test sets performances decrease, varying between 94-97%. In the case of UGC, comparing the F1 score we can observe how performances significantly decrease. It can be considered a natural consequence of the complex nature of the users' informal language in comparison to the structured message created by the bot.\nIn Table 7, results of the schedule matching are reported. We can observe how the quality of the linking performed by the algorithm is correlated to the choice of the three thresholds. Indeed, the Precision score increase when the time threshold decrease, admitting less candidates as entities during the matching, and when the string similarity thresholds increase, accepting only candidates with an higher degree of similarity. The behaviour of the Recall score is inverted.\nFinally, we test the impact of using the schedule matching together with a biLSTM-CRF network. In this experiment, we consider the network trained using all the features proposed, and the embeddings not pre-trained. Table 8 reports the results obtained. We can observe how generally the system benefits from the use of the schedule information. Especially in the testing part, where the neural network recognizes with less accuracy, the explicit information contained in the schedule can be exploited for identifying the entities at which users are referring while listening to the radio and posting the tweets.\n\nConclusion\nWe have presented in this work a novel method for detecting musical entities from user-generated content, modelling linguistic features with statistical models and extracting contextual information from a radio schedule. We analyzed tweets related to a classical music radio station, integrating its schedule to connect users' messages to tracks broadcasted. We focus on the recognition of two kinds of entities related to the music field, Contributor and Musical Work.\nAccording to the results obtained, we have seen a pronounced difference between the system performances when dealing with the Contributor instead of the Musical Work entities. Indeed, the former type of entity has been shown to be more easily detected in comparison to the latter, and we identify several reasons behind this fact. Firstly, Contributor entities are less prone to be shorten or modified, while due to their longness, Musical Work entities often represent only a part of the complete title of a musical piece. Furthermore, Musical Work titles are typically composed by more tokens, including common words which can be easily misclassified. The low performances obtained in the case of Musical Work entities can be a consequences of these observations. On the other hand, when referring to a Contributor users often use only the surname, but in most of the cases it is enough for the system to recognizing the entities.\nFrom the experiments we have seen that generally the biLSTM-CRF architecture outperforms the SVM model. The benefit of using the whole set of features is evident in the training part, but while testing the inclusion of the features not always leads to better results. In addition, some of the features designed in our experiments are tailored to the case of classical music, hence they might not be representative if applied to other fields. We do not exclude that our method can be adapted for detecting other kinds of entity, but it might be needed to redefine the features according to the case considered. Similarly, it has not been found a particular advantage of using the pre-trained embeddings instead of the one trained with our corpora. Furthermore, we verified the statistical significance of our experiment by using Wilcoxon Rank-Sum Test, obtaining that there have been not significant difference between the various model considered while testing.\nThe information extracted from the schedule also present several limitations. In fact, the hypothesis that a tweet is referring to a track broadcasted is not always verified. Even if it is common that radios listeners do comments about tracks played, or give suggestion to the radio host about what they would like to listen, it is also true that they might refer to a Contributor or Musical Work unrelated to the radio schedule.\n\nTable 2. Example of entities annotated and corresponding formal forms, from the user-generated tweet (1) in Table 1.\n\nTable 3. Examples of bot-generated tweets.\n\nTable 4. Tokens’ distributions within the two datasets: user-generated tweets (top) and bot-generated tweets (bottom)\n\nFig. 2. Example of the workflow for recognizing entities in UGC using the information from the radio schedule\n\nTable 6. F1 score for Contributor(C) and Musical Work(MW) entities recognized from bot-generated tweets (top) and user-generated tweets (bottom)\n\nTable 8. Precision (P), Recall (R) and F1 score for Contributor (C) and Musical Work (MW) entities recognized from user-generated tweets using the biLSTM-CRF network together with the schedule matching. The thresholds used for the matching are t=1200, w=0.5, c=0.5.\n\nQuestion: What are their results on the entity recognition task?",
"role": "user"
},
{
"content": "{\"answer\": [\"With both test sets performances decrease, varying between 94-97%\"], \"evidence\": [\"The performances of the NER experiments are reported separately for three different parts of the system proposed.\", \"Table 6 presents the comparison of the various methods while performing NER on the bot-generated corpora and the user-generated corpora. Results shown that, in the first case, in the training set the F1 score is always greater than 97%, with a maximum of 99.65%. With both test sets performances decrease, varying between 94-97%. In the case of UGC, comparing the F1 score we can observe how performances significantly decrease. It can be considered a natural consequence of the complex nature of the users\\u0027 informal language in comparison to the structured message created by the bot.\"]}",
"role": "assistant"
}
] |
science.mslr2022_ms2_multidoc_summarization
|
science.mslr2022_ms2_multidoc_summarization.1729
|
[
{
"content": "You will be presented with a collection of medical randomized controlled trial (RCT) reports addressing a shared clinical question -- for instance, the efficacy of a medical intervention on a disease or condition. Given the titles and abstracts of these reports, your task is to generate the conclusion of the systematic review article synthesizing the key findings. Your conclusion should be a few sentences long. In your response, include only the conclusion and no other text. The RCT reports are below.\n\n\n\nTitle 1:\nThe UK Resilience Programme: a school-based universal nonrandomized pragmatic controlled trial.\nAbstract 1:\nOBJECTIVE The study aim ed to assess the effectiveness of an 18-hr cognitive behavioral group intervention in reducing depressive symptoms ( and associated outcomes ) in a universal sample of students in mainstream schools in Engl and . The intervention , the UK Resilience Programme ( UKRP ) , was based on the Penn Resiliency Program for Children and Adolescents . METHOD Students ( N = 2,844 ; 49 % female ; 67 % White ) were ages 11 - 12 at 16 schools . Classes of students were assigned arbitrarily into intervention ( UKRP ) or control ( usual school provision ) conditions based on class timetables . Outcome measures were the Children 's Depression Inventory ( Kovacs , 1992 ) ( depressive symptoms , primary outcome ) ; Revised Children 's Manifest Anxiety Scale ( C. R. Reynolds & Richmond , 1985 ) ( anxiety ) ; and child-reported Goodman ( 1997 ) Strengths and Difficulties Question naire ( behavior ) . Students were surveyed at baseline , postintervention , 1-year follow-up , and 2-year follow-up . RESULTS At postintervention , UKRP students reported lower levels of depressive symptoms than control group students , but the effect was small ( d = 0.093 , 95 % CI [ -0.178 , -0.007 ] , p = .034 ) and did not persist to 1-year or 2-year follow-ups . There was no significant impact on symptoms of anxiety or behavior at any point . CONCLUSIONS UKRP produced small , short-term impacts on depression symptoms and did not reduce anxiety or behavioral problems . These findings suggest that interventions may produce reduced impacts when rolled out and taught by regular school staff . We discuss the implication s of these findings for policy and for future dissemination efforts\n\n\nTitle 2:\nClassroom-based cognitive behaviour therapy (FRIENDS): a cluster randomised controlled trial to Prevent Anxiety in Children through Education in Schools (PACES).\nAbstract 2:\nBACKGROUND Anxiety in children is common , impairs everyday functioning , and increases the risk of severe mental health disorders in adulthood . We investigated the effect of a classroom-based cognitive behaviour therapy prevention programme ( FRIENDS ) on anxiety symptoms in children . METHODS Preventing Anxiety in Children though Education in Schools ( PACES ) is a three-group parallel cluster r and omised controlled trial . Interventions were given between September , 2011 , and July , 2012 , with schools as the unit of allocation and individual participants as the unit of analysis . We enrolled state-funded junior schools in southwest Engl and . We sent information to all eligible schools ( state-funded junior schools in southwest Engl and ) inviting them to enrol in the study . School year groups were assigned by computer-generated r and omisation ( 1:1:1 ) to receive either school-led FRIENDS ( led by teacher or school staff member ) , health-led FRIENDS ( led by two trained health facilitators ) , or usual school provision . Children were not masked to treatment allocation . The allocated programme was given to all students ( aged 9 - 10 years ) in the school year ( ie , universal delivery ) as part of the school curriculum as nine , 60 min weekly sessions . Outcomes were collected by self-completed question naire administered by research ers masked to allocation . Primary outcome was symptoms of anxiety and low mood at 12 months assessed by the Revised Child Anxiety and Depression Scale ( RCADS 30 ) . Analyses were intention to treat and accounted for the clustered nature of the design . The study is registered , number IS RCT N23563048 . FINDINGS 45 schools were enrolled : 14 ( n=497 children ) were r and omly assigned to school-led FRIENDS , 14 ( n=509 ) to health-led FRIENDS , and 12 ( n=442 ) to usual school provision . 1257 ( 92 % ) children completed 12 month assessment s ( 449 in health-led FRIENDS , 436 in school-led FRIENDS , and 372 in usual school provision ) . We recorded a difference at 12 months in adjusted mean child-reported RCADS scores for health-led versus school-led FRIENDS ( 19·49 [ SD 14·81 ] vs 22·86 [ 15·24 ] ; adjusted difference -3·91 , 95 % CI -6·48 to -1·35 ; p=0·0004 ) and health-led FRIENDS versus usual school provision ( 19·49 [ 14·81 ] vs 22·48 [ 15·74 ] ; -2·66 , -5·22 to -0·09 ; p=0·043 ) . We noted no differences in parent or teacher ratings . Training teachers to deliver mental health programmes was not as effective as delivery by health professionals . INTERPRETATION Universally delivered anxiety prevention programmes can be effective when used in schools . However , programme effectiveness varies depending on who delivers them . FUNDING National Institute for Health Research Public Health Research Programme\n\n\nTitle 3:\nImpact of contact on adolescents’ mental health literacy and stigma: the SchoolSpace cluster randomised controlled trial\nAbstract 3:\nObjectives To investigate whether intergroup contact in addition to education is more effective than education alone in reducing stigma of mental illness in adolescents . Design A pragmatic cluster r and omised controlled trial compared education alone with education plus contact . Blocking was used to r and omly stratify classes within schools to condition . R and om allocation was concealed , generated by a computer algorithm , and undertaken after pretest . Data was collected at pretest and 2-week follow-up . Analyses use an intention-to-treat basis . Setting Secondary schools in Birmingham , UK . Participants The parents and guardians of all students in year 8 ( age 12–13 years ) were approached to take part . Interventions A 1-day educational programme in each school led by mental health professional staff . Students in the ‘ contact ’ condition received an interactive session with a young person with lived experience of mental illness . Outcomes The primary outcome was students ’ attitudinal stigma of mental illness . Secondary outcomes included knowledge-based stigma , mental health literacy , emotional well-being and resilience , and help-seeking attitudes . Results Participants were recruited between 1 May 2011 and 30 April 2012 . 769 participants completed the pretest and were r and omised to condition . 657 ( 85 % ) provided follow-up data . At 2-week follow-up , attitudinal stigma improved in both conditions with no significant effect of condition ( 95 % CI −0.40 to 0.22 , p=0.5 , d=0.01 ) . Significant improvements were found in the education-alone condition compared with the contact and education condition for the secondary outcomes of knowledge-based stigma , mental health literacy , emotional well-being and resilience , and help-seeking attitudes . Conclusions Contact was found to reduce the impact of the intervention for a number of outcomes . Caution is advised before employing intergroup contact with younger student age groups . The education intervention appeared to be successful in reducing stigma , promoting mental health knowledge , and increasing mental health literacy , as well as improving emotional well-being and resilience . A larger trial is needed to confirm these results . Trial registration number IS RCT N07406026 ; Results\n\n\nTitle 4:\nEffectiveness of the Mindfulness in Schools Programme: non-randomised controlled feasibility study.\nAbstract 4:\nBACKGROUND Mindfulness-based approaches for adults are effective at enhancing mental health , but few controlled trials have evaluated their effectiveness among young people . AIMS To assess the acceptability and efficacy of a schools-based universal mindfulness intervention to enhance mental health and well-being . METHOD A total of 522 young people aged 12 - 16 in 12 secondary schools either participated in the Mindfulness in Schools Programme ( intervention ) or took part in the usual school curriculum ( control ) . RESULTS Rates of acceptability were high . Relative to the controls , and after adjusting for baseline imbalances , children who participated in the intervention reported fewer depressive symptoms post-treatment ( P = 0.004 ) and at follow-up ( P = 0.005 ) and lower stress ( P = 0.05 ) and greater well-being ( P = 0.05 ) at follow-up . The degree to which students in the intervention group practised the mindfulness skills was associated with better well-being ( P<0.001 ) and less stress ( P = 0.03 ) at 3-month follow-up . CONCLUSIONS The findings provide promising evidence of the programme 's acceptability and efficacy\n\n\nTitle 5:\nExamining reward-seeking, negative self-beliefs and over-general autobiographical memory as mechanisms of change in classroom prevention programs for adolescent depression\nAbstract 5:\nBackground Effective methods to prevent adolescent depressive symptoms could reduce suffering and burden across the lifespan . However , psychological interventions delivered to adolescents show efficacy only in symptomatic or high-risk youth . Targeting causal risk factors and assessing mechanistic change can help devise efficacious universal or classroom based prevention programs . Methods A non-r and omized longitudinal design was used to compare three classroom-based prevention programs for adolescent depression ( Behavioral Activation with Reward Processing , “ Thinking about Reward in Young People ” ( TRY ) ; Cognitive Behavioral Therapy ( CBT ) and Mindfulness Based Cognitive Therapy ( MBCT ) ) , and determine cognitive mechanisms of change in these programs . Cognitive mechanisms examined were reward-seeking , negative self-beliefs ( assessed with behavioral tasks ) and over-general autobiographical memory . 256 healthy adolescents aged 13–14 participated with 236 ( 92 % ) and 227 ( 89 % ) completing the pre- and post- assessment s. Results TRY was the only intervention associated with a reduction in depressive symptoms at follow-up . Reward-seeking increased following TRY . In the other programs there were non-significant changes in cognitive mechanisms , with more reflective negative self-beliefs in CBT and fewer over-general autobiographical memories in MBCT In the TRY program , which focused on increasing sensitivity to rewarding activities , reward seeking increased and this was associated with decreased depressive symptoms . Limitations Due to the infeasibility of a cluster r and omized controlled trial , a non-r and omized design was used . Conclusions Increased reward-seeking was associated with decreased depressive symptoms and may be a mechanism of depressive symptom change in the intervention with a focus on enhancing sensitivity and awareness of reward . This study provides preliminary evidence to suggest that incorporating activities to enhance reward sensitivity may be fruitful in r and omized controlled trials of universal prevention programs for depression\n\n\nTitle 6:\nThe feasibility of creating a checklist for the assessment of the methodological quality both of randomised and non-randomised studies of health care interventions.\nAbstract 6:\nOBJECTIVE : To test the feasibility of creating a valid and reliable checklist with the following features : appropriate for assessing both r and omised and non-r and omised studies ; provision of both an overall score for study quality and a profile of scores not only for the quality of reporting , internal validity ( bias and confounding ) and power , but also for external validity . DESIGN : A pilot version was first developed , based on epidemiological principles , review s , and existing checklists for r and omised studies . Face and content validity were assessed by three experienced review ers and reliability was determined using two raters assessing 10 r and omised and 10 non-r and omised studies . Using different raters , the checklist was revised and tested for internal consistency ( Kuder-Richardson 20 ) , test-retest and inter-rater reliability ( Spearman correlation coefficient and sign rank test ; kappa statistics ) , criterion validity , and respondent burden . MAIN RESULTS : The performance of the checklist improved considerably after revision of a pilot version . The Quality Index had high internal consistency ( KR-20 : 0.89 ) as did the subscales apart from external validity ( KR-20 : 0.54 ) . Test-retest ( r 0.88 ) and inter-rater ( r 0.75 ) reliability of the Quality Index were good . Reliability of the subscales varied from good ( bias ) to poor ( external validity ) . The Quality Index correlated highly with an existing , established instrument for assessing r and omised studies ( r 0.90 ) . There was little difference between its performance with non-r and omised and with r and omised studies . Raters took about 20 minutes to assess each paper ( range 10 to 45 minutes ) . CONCLUSIONS : This study has shown that it is feasible to develop a checklist that can be used to assess the method ological quality not only of r and omised controlled trials but also non-r and omised studies . It has also shown that it is possible to produce a checklist that provides a profile of the paper , alerting review ers to its particular method ological strengths and weaknesses . Further work is required to improve the checklist and the training of raters in the assessment of external validity\n\n\nTitle 7:\nChild anxiety treatment: outcomes in adolescence and impact on substance use and depression at 7.4-year follow-up.\nAbstract 7:\nResearch suggests that the sequelae of childhood anxiety disorders , if left untreated , can include chronic anxiety , depression , and substance abuse . The current study evaluated the maintenance of outcomes of children who received a 16-week cognitive-behavioral treatment for primary anxiety disorders ( generalized , separation , and social anxiety disorders ) an average of 7.4 years earlier . The 86 participants ( ages 15 to 22 years ; 91 % of the original sample ) and their parents completed diagnostic interviews and self- and parent-report measures . According to the diagnostic interviews , a meaningful percentage of participants maintained significant improvements in anxiety at long-term follow-up . With regard to sequelae , positive responders to anxiety treatment , as compared with less positive responders , had a reduced amount of substance use involvement and related problems at long-term follow-up . The findings are discussed with regard to child anxiety and some of its sequelae",
"role": "user"
},
{
"content": "1 RESULTS : Effectiveness of school-based universal interventions was found to be neutral or small with more positive effects found for poorer quality studies and those based in primary schools ( pupils aged 9 - 12 years ) .\nOverall , results suggested a trend whereby higher quality studies reported less positive effects .\nThe current evidence suggests there are neutral to small effects of universal , school-based interventions in the UK that aim to promote emotional or mental well-being or the prevention of mental health difficulties .",
"role": "assistant"
}
] |
science.genia_ner
|
science.genia_ner.1086
|
[
{
"content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"RNA\", \"CellLine\", \"Protein\", \"DNA\", \"CellType\"].\n\nPlease return the output as a JSON object of the format: {\"DNA\" : [\"TAL1\", ...], \"RNA\": [\"mRNAs\", ...], \"CellLine\": [\"T cells\", ...], \"CellType\": [\"monocytes\", ...], \"Protein\": [\"IL-2\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nComparison of the transactivation domains of Stat5 and Stat6 in lymphoid cells and mammary epithelial cells. Stat (signal transducers and activators of transcription) and Jak (Janus kinases) proteins are central components in the signal transduction events in hematopoietic and epithelial cells. They are rapidly activated by various cytokines, hormones, and growth factors. Upon ligand binding and cytokine receptor dimerization, Stat proteins are phosphorylated on tyrosine residues by Jak kinases. Activated Stat proteins form homo-or heterodimers, translocate to the nucleus, and induce transcription from responsive genes. Stat5 and Stat6 are transcription factors active in mammary epithelial cells and immune cells. Prolactin activates Stat5, and interleukin-4 (IL-4) activates Stat6. Both cytokines are able to stimulate cell proliferation, differentiation, and survival. We investigated the transactivation potential of Stat6 and found that it is not restricted to lymphocytes. IL-4-dependent activation of Stat6 was also observed in HC11 mammary epithelial cells. In these cells, Stat6 activation led to the induction of the beta-casein gene promoter. The induction of this promoter was confirmed in COS7 cells. The glucocorticoid receptor was able to further enhance IL-4-induced gene transcription through the action of Stat6. Deletion analysis of the carboxyl-terminal region of Stat6 and recombination of this region with a heterologous DNA binding domain allowed the delimitation and characterization of the transactivation domain of Stat6. The potencies of the transactivation domains of Stat5, Stat6, and viral protein VP16 were compared. Stat6 had a transactivation domain which was about 10-fold stronger than that of Stat5. In pre-B cells (Ba / F3), the transactivation domain of Stat6 was IL-4 regulated, independently from its DNA binding function.",
"role": "user"
},
{
"content": "{\"CellLine\": [\"mammary epithelial cells\", \"COS7 cells\", \"these cells\", \"HC11 mammary epithelial cells\"], \"CellType\": [\"pre - B cells\", \"mammary epithelial cells\", \"hematopoietic and epithelial cells\", \"lymphoid cells\", \"epithelial cells\"], \"DNA\": [\"beta - casein gene promoter\"], \"Protein\": [\"signal transducers and activators of transcription\", \"IL - 4\", \"Stat6\", \"viral protein VP16\", \"Stat\", \"glucocorticoid receptor\", \"Jak\", \"interleukin - 4\", \"cytokine receptor\", \"Stat5\", \"transactivation domains\", \"carboxyl - terminal region\", \"Jak kinases\", \"Stat proteins\", \"cytokines\", \"heterologous DNA binding domain\", \"Janus kinases\", \"transcription factors\"], \"RNA\": []}",
"role": "assistant"
}
] |
science.mslr2022_ms2_multidoc_summarization
|
science.mslr2022_ms2_multidoc_summarization.5552
|
[
{
"content": "You will be presented with a collection of medical randomized controlled trial (RCT) reports addressing a shared clinical question -- for instance, the efficacy of a medical intervention on a disease or condition. Given the titles and abstracts of these reports, your task is to generate the conclusion of the systematic review article synthesizing the key findings. Your conclusion should be a few sentences long. In your response, include only the conclusion and no other text. The RCT reports are below.\n\n\n\nTitle 1:\nOxygen improves maximal exercise performance in interstitial lung disease.\nAbstract 1:\nWe examined whether arterial hypoxemia impairs incremental exercise performance in subjects with interstitial lung disease ( ILD ) . Seven subjects underwent two incremental exercise tests on a bicycle ergometer in r and om order ; one while breathing room air ( RA ) , and the other while breathing 60 % O2 . Maximal exercise performance was impaired in all subjects : maximal oxygen uptake ( peak VO2 ) was 56 + /- 4 % predicted ( + /- SEM ) ; and all subjects demonstrated significant arterial oxygen desaturation during exercise breathing RA ( mean 11 + /- 1 % ) . Breathing 60 % O2 during exercise result ed in a significant increase in peak VO2 ( RA : 1.32 + /- 0.05 L/min ; O2 : 1.58 + /- 0.08 L/min ; p < 0.05 ) , exercise duration ( RA : 390 + /- 21 s ; O2 : 458 + /- 24 s ; p < 0.01 ) and maximal work load ( RA : 112 + /- 6 watts ; O2 : 129 + /- 6 watts ; p < 0.005 ) . There was no significant difference in maximal minute ventilation ( VI ) achieved at the end of both tests . At matched ventilation ( 90 % peak VI from the RA test ) , respiratory frequency ( f ) was significantly higher ( RA : 33 + /- 2 breaths/min ; O2 : 35 + /- 2 breaths/min ; p < 0.05 ) , and tidal volume ( VT ) significantly lower ( RA : 1.72 + /- 0.15 L ; O2 : 1.64 + /- 0.12 ; p < 0.05 ) when subjects exercised breathing oxygen . We conclude that arterial hypoxemia significantly impairs incremental exercise performance in subjects with ILD , but that mechanisms other than arterial oxygen desaturation are responsible for the rapid , shallow breathing pattern these subjects adopt during exercise\n\n\nTitle 2:\nClinically important differences in the intensity of chronic refractory breathlessness.\nAbstract 2:\nCONTEXT Clinical ly important differences in chronic refractory breathlessness are ill defined but important in clinical practice and trial design . OBJECTIVES To estimate the clinical relevance of differences in breathlessness intensity using distribution and patient anchor methods . METHODS This was a retrospective data analysis from 213 data sets from four clinical trials for refractory breathlessness . Linear regression was used to explore the relationship between study effect size and change in breathlessness score ( 0 - 100 mm visual analogue scale ) and to estimate the change in score equivalent to small , moderate , and large effect sizes . Pooled individual blinded patient preference data from three r and omized controlled trials were analyzed . The difference between the mean change in Day 4 minus baseline scores between preferred and non-preferred arms was calculated . RESULTS There was a strong relationship between change in score and effect size ( P = 0.001 ; R(2 ) = 0.98 ) . Values for small , moderate , and large effects were -5.5 , -11.3 , and -18.2 mm . The participant preference change in score was -9 mm ( 95 % CI , -15.8 , -2.1 ) ( P = 0.008 ) . CONCLUSION This larger data set supports a clinical ly important difference of 10 mm . Studies should be powered to detect this difference\n\n\nTitle 3:\nEffects of oxygen on exercise-induced increase of pulmonary arterial pressure in idiopathic pulmonary fibrosis.\nAbstract 3:\nINTRODUCTION Idiopathic pulmonary fibrosis ( IPF ) is a severe disease with no known effective therapy . Patients with IPF may develop severe increase of pulmonary arterial pressure ( PAP ) on exercise , the mechanisms of which is not clearly identified . OBJECTIVES To determine whether oxygen may correct the increase of PAP developed during exercise in patients with IPF . PATIENTS AND METHODS We performed a prospect i ve study on patients with IPF and no hypoxaemia at rest . The absence of pulmonary hypertension ( PH ) at rest was confirmed by echocardiography ( systolic PAP < 35 mmHg ) . Eight patients underwent echocardiography during exercise in air and with oxygen ( to maintain saturation of at least 94 % ) . Right ventricle-right atrium gradient and cardiac output were measured at rest , after each increment and at peak . We then compared the echocardiographic results obtained for air and oxygen . RESULTS All patients developed significant increase of SPAP on exercise ( 73 + /- 14 mmHg in air ) . Oxygen did not significantly improve SPAP on exercise ( SPAP : 76 + /- 15 mmHg ) . Echocardiographic characteristics were similar between air and oxygen except for exercise tolerance in term of workload ( p=0.045 ) and endurance ( p=0.017 ) . Resting pulmonary function tests did not predict the occurrence of increase of PAP on exercise . CONCLUSION Our results demonstrate that oxygen does not improve exercise-induced increase of PAP in patients with IPF and support the hypothesis that hypoxic vaso-constriction is not the main mechanism of acute increase of PAP during exercise\n\n\nTitle 4:\nLONG TERM DOMICILIARY OXYGEN THERAPY IN CHRONIC HYPOXIC COR PULMONALE COMPLICATING CHRONIC BRONCHITIS AND EMPHYSEMA: Report of the Medical Research Council Working Party\nAbstract 4:\nA controlled trial of long term domiciliary oxygen therapy has been carried out in three centres in the U.K. The 87 patients , all under 70 years of age , who took part had chronic bronchitis or emphysema with irreversible airways obstruction , severe arterial hypoxaemia , carbon dioxide retention , and a history of congestive heart failure . The patients were r and omised to oxygen therapy ( treated ) or no oxygen ( controls ) . Oxygen was given by nasal prongs for at least 15 h daily , usually at 2 1/min . The two groups were well matched , both clinical ly and in terms of lung function and other laboratory findings . 19 of the 42 oxygen treated patients died in the five years of survival follow-up compared with 30 out of 45 controls : in the 66 men in this trial the survival advantage of oxygen did not emerge until 500 days had elapsed . Survival for the 12 female controls was surprisingly poor , 8 of them being dead at 3 years . Mortality was not easy to predict , though a summation of arterial carbon dioxide tension and red cell mass was helpful . Neither time spent in hospital because of exacerbations of respiratory failure nor work attendance were affected by oxygen therapy , but these patients were very ill at the start of the trial and many had already retired on grounds of age or ill-health . Physiological measurements suggested that oxygen did not slow the progress of respiratory failure in those who died early . However , in longer term survivors on oxygen , arterial oxygenation did seem to stop deterioration\n\n\nTitle 5:\nContinuous or nocturnal oxygen therapy in hypoxemic chronic obstructive lung disease: a clinical trial. Nocturnal Oxygen Therapy Trial Group.\nAbstract 5:\nAt six centers , 203 patients with hypoxemic chronic obstructive lung disease were r and omly allocated to either continuous oxygen ( O2 ) therapy or 12-hour nocturnal O2 therapy and followed for at least 12 months ( mean , 19.3 months ) . The two groups were initially well matched in terms of physiological and neuropsychological function . Compliance with each oxygen regimen was good . Overall mortality in the nocturnal O2 therapy group was 1.94 times that in the continuous O2 therapy group ( P = 0.01 ) . This trend was striking in patients with carbon dioxide retention and also present in patients with relatively poor lung function , low mean nocturnal oxygen saturation , more severe brain dysfunction , and prominent mood disturbances . Continuous O2 therapy also appeared to benefit patients with low mean pulmonary artery pressure and pulmonary vascular resistance and those with relatively well-preserved exercise capacity . We conclude that in hypoxemic chronic obstructive lung disease , continuous O2 therapy is associated with a lower mortality than is nocturnal O2 therapy . The reason for this difference is not clear\n\n\nTitle 6:\nSymptomatic benefit of supplemental oxygen in hypoxemic patients with chronic lung disease.\nAbstract 6:\nWe have compared the symptomatic benefit of air and oxygen at rest in hypoxemic patients with chronic obstructive airway disease ( COAD ) or interstitial lung disease ( ILD ) . A total of 12 severely disabled patients with COAD ( mean + /- SEM , PaO2 , 50.3 + /- 3.7 mm Hg ) and 10 with ILD ( PaO2 , 48.0 + /- 3.1 mm Hg ) received 28 % oxygen and air by Venturi face mask , each gas on two occasions , in a double-blind r and omized fashion . SaO2 increased ( p less than 0.01 ) in both groups during oxygen breathing : COAD , 85.1 + /- 2.3 % versus 93.1 + /- 1.4 % ; ILD , 85.5 + /- 1.7 % versus 94.7 + /- 0.9 % . The patients with COAD stated that air helped their breathing on 15 of 24 occasions and that oxygen helped on 22 of 24 occasions ( p less than 0.05 ) . In the patients with ILD the values were 6 of 20 and 13 of 20 occasions , respectively ( p less than 0.05 ) . In both groups of patients the severity of breathlessness recorded on a 100-mm visual analog scale was significantly ( p less than 0.05 ) lower during oxygen breathing : COAD , 29.6 + /- 4.5 versus 45.6 + /- 6.0 ; ILD , 30.2 + /- 5.1 versus 48.1 + /- 4.4 . Ventilation measured by magnetometers was significantly lower during oxygen breathing in the patients with COAD ( 8.2 + /- 1.0 versus 9.3 + /- 1.1 L/min ; p less than 0.05 ) , but the difference between oxygen and air in patients with ILD was not statistically significant ( 9.3 + /- 1.3 versus 11.2 + /- 1.6 L/min ; p greater than 0.05 ) . ( ABSTRACT TRUNCATED AT 250 WORDS\n\n\nTitle 7:\nEffect of ambulatory oxygen on exertional dyspnea in IPF patients without resting hypoxemia.\nAbstract 7:\nBACKGROUND AND OBJECTIVE The effects of ambulatory oxygen for idiopathic pulmonary fibrosis ( IPF ) patients without resting hypoxemia have not been eluci date d. The purpose of this study was to assess the effect of ambulatory oxygen on dyspnea in IPF patients without resting hypoxemia but with desaturation on exertion . METHODS This was a double-blind , placebo-controlled , r and omized crossover trial of ambulatory oxygen versus ambulatory air . Patients with IPF who had a partial pressure of arterial oxygen ( PaO2 ) between 60 mm Hg and 80 mm Hg at rest , and desaturation of 88 % or less in a room-air 6-min walk test were eligible . Patients underwent a st and ardized 6-min walk test and a 6-min free walk test under each ambulatory gas . Oxygen and air were provided at 4 L/min intranasally . Dyspnea was evaluated immediately , 1 , and 2 min after the tests . RESULTS Twenty patients ( 16 men ) , with a mean age of 73.5 ( SD 4.1 ) years , % predicted forced vital capacity ( FVC ) of 71.0 ( 13.3 ) % , % predicted diffusion capacity for carbon monoxide ( DLco ) of 57.0 ( 13.3 ) % , and PaO2 of 72.5 ( 5.4 ) mm Hg were recruited . No significant differences in dyspnea were observed between ambulatory oxygen and air at each time point . However , some patients showed improvement in dyspnea with oxygen on an individual basis . CONCLUSIONS Since oxygen provides no additional benefit over air in terms of exertional dyspnea for IPF patients without resting hypoxemia , routine prescription of ambulatory oxygen is not recommended . However , assessment on an individual basis is necessary . Trial registration . UMIN Clinical Trial Registry ; No.:UMIN000005098 ; URL :",
"role": "user"
},
{
"content": "This systematic review showed no effects of oxygen therapy on dyspnoea during exercise in ILD , although exercise capacity was increased .",
"role": "assistant"
}
] |
science.bioasq_general_qa
|
science.bioasq_general_qa.523
|
[
{
"content": "You will be shown a question, followed by excerpts from biomedical research papers. Please answer the question based on the provided context. Do not include any text in your response other than the answer.\n\nQuestion: What is the function of mTOR?\n\nContext: Protein synthesis regulation via mammalian target of rapamycin complex 1 (mTORC1) signaling pathway has key roles in neural development and function,\nmTOR regulates assembly of the translation initiation machinery by interacting with the eukaryotic initiation factor eIF3 complex and by controlling phosphorylation of key translational regulators\nMammalian target of rapamycin (mTOR) complexes, mTORC1 and mTORC2, are master regulators of cellular survival, growth and metabolism\n(mTOR) is a serine-threonine kinase that controls several important aspects of mammalian cell function. mTOR activity is modulated by various intra- and extracellular factors; in turn, mTOR changes rates of translation, transcription, protein degradation, cell signaling, metabolism, and cytoskeleton dynamics. mTOR has been repeatedly shown to participate in neuronal development and the proper functioning of mature neurons.\nmammalian target of rapamycin (mTOR), a member of the phosphoinositide 3-kinase related kinase (PIKK) family, plays a central role in the regulation of cell growth\n(mTOR) is a serine/threonine kinase and that forms two multiprotein complexes known as the mTOR complex 1 (mTORC1) and mTOR complex 2 (mTORC2). mTOR regulates cell growth, proliferation and survival\n(mTOR), a downstream effector of the phosphatidylinositol 3-kinase (PI3K)/Akt (protein kinase B) signaling pathway that mediates cell survival and proliferation,",
"role": "user"
},
{
"content": "The mTOR protein regulates assembly of the translation initiation machinery and are master regulators of cellular survival, growth and metabolism.",
"role": "assistant"
}
] |
science.mslr2022_ms2_multidoc_summarization
|
science.mslr2022_ms2_multidoc_summarization.11455
|
[
{
"content": "You will be presented with a collection of medical randomized controlled trial (RCT) reports addressing a shared clinical question -- for instance, the efficacy of a medical intervention on a disease or condition. Given the titles and abstracts of these reports, your task is to generate the conclusion of the systematic review article synthesizing the key findings. Your conclusion should be a few sentences long. In your response, include only the conclusion and no other text. The RCT reports are below.\n\n\n\nTitle 1:\nSize is everything – large amounts of information are needed to overcome random effects in estimating direction and magnitude of treatment effects\nAbstract 1:\nAbstract Variability in patients ' response to interventions in pain and other clinical setting s is large . Many explanations such as trial methods , environment or culture have been proposed , but this paper sets out to show that the main cause of the variability may be r and om chance , and that if trials are small their estimate of magnitude of effect may be incorrect , simply because of the r and om play of chance . This is highly relevant to the questions of ‘ How large do trials have to be for statistical accuracy ? ’ and ‘ How large do trials have to be for their results to be clinical ly valid ? ’ The true underlying control event rate ( CER ) and experimental event rate ( EER ) were determined from single‐dose acute pain analgesic trials in over 5000 patients . Trial group size required to obtain statistically significant and clinical ly relevant ( 0.95 probability of number‐needed‐to‐treat within ±0.5 of its true value ) results were computed using these values . Ten thous and trials using these CER and EER values were simulated using varying group sizes to investigate the variation due to r and om chance alone . Most common analgesics have EERs in the range 0.4–0.6 and CER of about 0.19 . With such efficacy , to have a 90 % chance of obtaining a statistically significant result in the correct direction requires group sizes in the range 30–60 . For clinical relevance nearly 500 patients are required in each group . Only with an extremely effective drug ( EER>0.8 ) will we be reasonably sure of obtaining a clinical ly relevant NNT with commonly used group sizes of around 40 patients per treatment arm . The simulated trials showed substantial variation in CER and EER , with the probability of obtaining the correct values improving as group size increased . We contend that much of the variability in control and experimental event rates is due to r and om chance alone . Single small trials are unlikely to be correct . If we want to be sure of getting correct ( clinical ly relevant ) results in clinical trials we must study more patients . Credible estimates of clinical efficacy are only likely to come from large trials or from pooling multiple trials of conventional ( small ) size\n\n\nTitle 2:\nDifluoromethylornithine Plus Sulindac for the Prevention of Sporadic Colorectal Adenomas: A Randomized Placebo-Controlled, Double-Blind Trial\nAbstract 2:\nAbstract Pre clinical studies of chemoprevention drugs given in combination at low doses show remarkable efficacy in preventing adenomas with little additional toxicities , suggesting a strategy to improve risk to benefit ratios for preventing recurrent adenomas . Three hundred seventy-five patients with history of resected ( ≥3 mm ) adenomas were r and omly assigned to receive oral difluoromethylornithine ( DFMO ) 500 mg and sulindac 150 mg once daily or matched placebos for 36 months , stratified by use of low-dose aspirin ( 81 mg ) at baseline and clinical site . Follow-up colonoscopy was done 3 years after r and omization or off- study . Colorectal adenoma recurrence was compared among the groups with log-binomial regression . Comparing the outcome in patients receiving placebos to those receiving active intervention , ( a ) the recurrence of one or more adenomas was 41.1 % and 12.3 % ( risk ratio , 0.30 ; 95 % confidence interval , 0.18 - 0.49 ; P < 0.001 ) ; ( b ) 8.5 % had one or more advanced adenomas , compared with 0.7 % of patients ( risk ratio , 0.085 ; 95 % confidence interval , 0.011 - 0.65 ; P < 0.001 ) ; and ( c ) 17 ( 13.2 % ) patients had multiple adenomas ( > 1 ) at the final colonoscopy , compared with 1 ( 0.7 % ; risk ratio , 0.055 ; 0.0074 - 0.41 ; P < 0.001 ) . Serious adverse events ( grade ≥3 ) occurred in 8.2 % of patients in the placebo group , compared with 11 % in the active intervention group ( P = 0.35 ) . There was no significant difference in the proportion of patients reporting hearing changes from baseline . Recurrent adenomatous polyps can be markedly reduced by a combination of low oral doses of DFMO and sulindac and with few side effects\n\n\nTitle 3:\nDeriving dichotomous outcome measures from continuous data in randomised controlled trials of analgesics: verification from independent data\nAbstract 3:\nAbstract A previously established relationship for deriving dichotomous from continuous information in r and omised controlled trials ( RCTs ) of analgesics has been tested using an independent data set . Individual patient information from 18 RCTs of parallel‐group design in acute postoperative pain ( after abdominal , gynaecological and oral surgery ) was used to calculate the percentage of the maximum possible pain relief score ( % maxTOTPAR ) and the proportion of patients with > 50%maxTOTPAR for the different treatments . The relationship between the measures was investigated in 85 treatments with over 3400 patients . In 80 of 85 treatments ( 94 % ) agreement between calculated and actual number of patients with > 50%maxTOTPAR was within four patients per treatment and in 72 ( 85 % ) was within three ( average of 40 patients per treatment , range 21–58 patients ) . Summing the positive and negative differences between actual and calculated numbers of patients with > 50%maxTOTPAR gave an average difference of 0.30 patients per treatment arm . Reports of RCTs of analgesics frequently describe results of studies in the form of mean derived indices , rather than using discontinuous events , such as number or proportion of patients with 50 % pain relief . Because mean data inadequately describe information with a non‐normal distribution , combining mean data in systematic review s may compromise the results . Showing that dichotomous data can reliably be derived from mean data in acute pain studies enables data published as means to be used for quantitative systematic review s which require data in dichotomous form\n\n\nTitle 4:\nDeriving dichotomous outcome measures from continuous data in randomised controlled trials of analgesics\nAbstract 4:\n& NA ; Reports of RCTs of analgesics frequently describe results of studies in the form of mean derived indices , rather than using discontinuous events — such as number or proportion of patients with 50 % pain relief . Because mean data inadequately describe information with a non‐normal distribution , combining mean data in systematic review s may compromise the results . Showing that dichotomous data can reliably be derived from mean data , at least in acute pain models , indicates that more meaningful overviews or meta‐ analysis may be possible . This study investigated the relationship between continuous and dichotomous analgesic measures in a set of individual patient data , and then used that relationship to derive dichotomous from continuous information in r and omised controlled trials ( RCTs ) of analgesics . Individual patient information from 13 RCTs of parallel‐group and crossover design in acute postoperative pain was used to calculate the percentage of the maximum possible pain relief score ( % maxTOTPAR ) and the proportion of patients with greater than 50 % pain relief ( > 50%maxTOTPAR ) for the different treatments . The relationship between the measures was investigated in 45 actual treatments and 10 000 treatments simulated using the underlying actual distribution ; 1283 patients had 45 separate treatments . Mean % maxTOTPAR correlated with the proportion of patients with > 50%maxTOTPAR ( r2 = 0.90 ) . The relationship calculated from all the 45 treatments predicted to within three patients the number of patients with more than 50 % pain relief in 42 of 45 treatments , and 98.8 % of 10 000 simulated treatments . For seven effective treatments , actual numbers‐needed‐to‐treat ( NNT ) to achieve > 50%maxTOTPAR compared with placebo were very similar to those derived from calculated data\n\n\nTitle 5:\nA double-blind randomized study of fetal side effects during and after the short-term maternal administration of indomethacin, sulindac, and nimesulide for the treatment of preterm labor.\nAbstract 5:\nOBJECTIVE The purpose of this study was to establish whether nimesulide causes fewer fetal side effects than indomethacin or sulindac after short-term maternal exposure for tocolysis . STUDY DESIGN This was a double-blind , double-dummy prospect i ve r and omized study with three drug treatment groups ( n = 10 per group ) that were comprised of subjects who were at 28 to 32 weeks of gestation with preterm contractions . The subjects were treated in the delivery suites of two busy inner-city teaching hospitals ; the intervention consisted of 48 hours of treatment and with 72 hours of follow-up observation with indomethacin 100 mg ( twice daily ) , sulindac 200 mg ( twice daily ) , or nimesulide 200 mg ( twice daily ) . The amniotic fluid index , hourly fetal urine production , and ductal Doppler pulsatility index observations were monitored before the treatment and at 4 , 24 , 48 , 72 , and 120 hours after the treatment was started . The statistical analysis used repeated measures analysis of variance , Bonferroni test , and Bl and -Altman agreement . Significance assumed when the probability value was < .05 . RESULTS Each drug caused a significant reduction in all three observations over the 48-hour treatment period , which recovered to pretreatment levels by 72 hours after treatment . There were no significant differences among drugs for any of these effects . CONCLUSION Nimesulide causes similar short-term fetal side effects to indomethacin and sulindac\n\n\nTitle 6:\nThe effect of long-term treatment with sulindac on the progression of diabetic retinopathy*\nAbstract 6:\nABSTRACT Objective : To evaluate the effects of long-term treatment with sulindac on the progression of diabetic retinopathy ( DR ) . Research design and methods : 40 Japanese patients with type 2 diabetes were enrolled in a r and omized , single-blind controlled trial in which the effects of sulindac ( 200 mg/day , 100 mg twice a day ; n = 16 patients ) on the progression of DR were compared to controls ( 24 patients ) for 3 years . All patients were comparable in their age , gender , duration of disease , body mass index , dipstick proteinuria , insulin therapy , glycemic control , and clinical stages of DR . Outcome was determined by comparing parameters of retinopathy in fundus photographs that were taken at time 0 to those taken 1 , 2 , and 3 years after the initiation of treatment . Results : Patients in the sulindac group did not develop DR during the course of treatment nor was there progression of pathology in those who began the study with mild non-proliferative DR ( NPDR ) . On the other h and , six patients progressed to mild NPDR in the control group – three at year 1 and three at year 3 – and an additional patient progressed to severe NPDR from mild NPDR by year 1 and to proliferative DR by year 2 . The findings at year 3 in the sulindac group were significantly ( p < 0.05 ) different from the control group . None of the patients experienced any adverse effects of treatment . Conclusions : Long-term treatment with sulindac was clinical ly effective in decreasing the progression of mild DR in type 2 diabetic patients in this pilot study\n\n\nTitle 7:\nDeveloping a database of published reports of randomised clinical trials in pain research \nAbstract 7:\n& NA ; A data base of r and omised clinical trials ( RCTs ) in pain research published from 1950 to 1990 was created following an extensive literature search . By applying a refined MEDLINE search strategy from 1966 to 1990 and by h and ‐ search ing more than 1 000 000 pages of a total of 40 biomedical journals published during the period 1950–1990 , more than 8000 RCTs were identified . The RCTs were published in more than 800 journals and over 85 % appeared between 1976 and 1990 . If the trend of the last 15 years persists , a total of more than 15 000 RCTs will be published in pain relief by the year 2000 . A detailed description of methods to ensure efficient use of re sources during the identification , retrieval and management of the information in pain relief and other fields is given . Emphasis is made on the importance of refining MEDLINE search strategies , on the use of volunteers to h and ‐ search journals and on careful monitoring of each of the steps of the process . The potential uses of the data base to guide clinical and research decisions are discussed\n\n\nTitle 8:\nThe visual analogue pain intensity scale: what is moderate pain in millimetres?\nAbstract 8:\nAbstract One way to ensure adequate sensitivity for analgesic trials is to test the intervention on patients who have established pain of moderate to severe intensity . The usual criterion is at least moderate pain on a categorical pain intensity scale . When visual analogue scales ( VAS ) are the only pain measure in trials we need to know what point on a VAS represents moderate pain , so that these trials can be included in meta‐ analysis when baseline pain of at least moderate intensity is an inclusion criterion . To investigate this we used individual patient data from 1080 patients from r and omised controlled trials of various analgesics . Baseline pain was measured using a 4‐point categorical pain intensity scale and a pain intensity VAS under identical conditions . The distribution of the VAS scores was examined for 736 patients reporting moderate pain and for 344 reporting severe pain . The VAS scores corresponding to moderate or severe pain were also examined by gender . Baseline VAS scores recorded by patients reporting moderate pain were significantly different from those of patients reporting severe pain . Of the patients reporting moderate pain 85 % scored over 30 mm on the corresponding VAS , with a mean score of 49 mm . For those reporting severe pain 85 % scored over 54 mm with a mean score of 75 mm . There was no difference between the corresponding VAS scores of men and women . Our results indicate that if a patient records a baseline VAS score in excess of 30 mm they would probably have recorded at least moderate pain on a 4‐point categorical scale\n\n\nTitle 9:\nA comparison of ketoprofen SR and sulindac in the elderly with rheumatoid arthritis.\nAbstract 9:\nThe elderly ( age > 65 years ) are more vulnerable to side-effects induced by non-steroidal anti-inflammatory drugs ( NSAIDs ) . We therefore performed a double-blind comparative study of ketoprofen SR and sulindac in patients with active rheumatoid arthritis , 65 years of age or older . Sulindac was chosen because of its possible renal sparing effects , and ketoprofen SR because of its short half life and sustained release delivery system . Eighty patients were entered . More patients withdrew from the study due to side-effects in the sulindac group ; both treatment groups had a high incidence of side-effects during this study and during previous exposure to other NSAIDs , demonstrating that the elderly are susceptible to side-effects from NSAIDs",
"role": "user"
},
{
"content": "AUTHORS ' CONCLUSIONS In the absence of evidence of efficacy , at present , for oral sulindac in acute postoperative pain , its use in this indication is not justified .",
"role": "assistant"
}
] |
science.data_reco_mcq_sc
|
science.data_reco_mcq_sc.752
|
[
{
"content": "Given a short query with keyphrases about a research idea in 2019 in the domain of artificial intelligence, recommend the relevant datasets to validate the research idea from options provided. A short description of the datasets is also provided to help you.\n\nQuery: I want to find a method for object classification from point clouds.\n\nKeyphrases: 3d object classification point clouds\n\nDataset description:\nScanObjectNN: ScanObjectNN is a newly published real-world dataset comprising of 2902 3D objects in 15 categories. It is a challenging point cloud classification datasets due to the background, missing parts and deformations.\nCompletion3D: The Completion3D benchmark is a dataset for evaluating state-of-the-art 3D Object Point Cloud Completion methods. Ggiven a partial 3D object point cloud the goal is to infer a complete 3D point cloud for the object.\nSNIPS: The SNIPS Natural Language Understanding benchmark is a dataset of over 16,000 crowdsourced queries distributed among 7 user intents of various complexity: SearchCreativeWork (e.g. Find me the I, Robot television show), GetWeather (e.g.\nShapeNet: ShapeNet is a large scale repository for 3D CAD models developed by researchers from Stanford University, Princeton University and the Toyota Technological Institute at Chicago, USA. The repository contains over 300M models with 220,000 classified into 3,135 classes arranged using WordNet hypernym-hyponym relationships.\n\nOptions:- A: ScanObjectNN B: Completion3D C: SNIPS D: ShapeNet",
"role": "user"
},
{
"content": "ShapeNet",
"role": "assistant"
}
] |
science.mslr2022_ms2_multidoc_summarization
|
science.mslr2022_ms2_multidoc_summarization.4413
|
[
{
"content": "You will be presented with a collection of medical randomized controlled trial (RCT) reports addressing a shared clinical question -- for instance, the efficacy of a medical intervention on a disease or condition. Given the titles and abstracts of these reports, your task is to generate the conclusion of the systematic review article synthesizing the key findings. Your conclusion should be a few sentences long. In your response, include only the conclusion and no other text. The RCT reports are below.\n\n\n\nTitle 1:\nNondestructive, in vitro quantification of crown margins.\nAbstract 1:\nSTATEMENT OF PROBLEM It is important that artificial crowns fit the prepared tooth accurately , as marginal deficiencies are predisposed to plaque accumulation and lead to increased risk of periodontal disease . Various methods of evaluation for marginal fit are described in the literature , but most approaches are limited by destructive methods of assessment and /or small points of measurement . PURPOSE This study compared , in vitro , the marginal fit of 4 types of complete crowns on human premolar teeth with the use of nondestructive profilometry . This method determined whether fit was influenced by type of crown or surface morphology of the tooth , namely , grooved or ungrooved surfaces . MATERIAL AND METHODS Four groups of specimens were prepared for complete crowns : group BA , bonding alloy with chamfer finish line ; group G , gold alloy with chamfer finish line ; group PC , porcelain with a chamfer finish line ; and group PS , porcelain with a shoulder finish line . Two profiles of grooved mesial and ungrooved distal surfaces of the teeth were performed : ( 1 ) teeth prepared for each type of crown and ( 2 ) teeth with crowns seated but not cemented . Marginal fit ( absolute marginal discrepancy ) from the finish line edge of the tooth preparations to crown edges ( CE ) and leading edges ( LE ) of crowns were measured . RESULTS A 2-way analysis of variance for crown type and tooth surface morphology revealed significant differences between crown types for all measurement parameters , except vertical LE . The effect of surface morphology was not significant , except for vertical LE ( P<.05 ) . For all parameters , except vertical LE , the ranking of marginal fit discrepancies from greatest to least was as follows : group PC , G , BA , and PS . For vertical LE distances , the ranking was PS , BA , G , and PC ( P<.05 ) . CONCLUSION Profilometry was used as a nondestructive , accurate method of evaluating the absolute marginal fit of different types of crowns . Marginal fits varied continuously around the circumference of each crown and made clinical assessment of fit accuracy subjective and arduous\n\n\nTitle 2:\nMisfit and microleakage of implant-supported crown copings obtained by laser sintering and casting techniques, luted with glass-ionomer, resin cements and acrylic/urethane-based agents.\nAbstract 2:\nOBJECTIVES This study evaluated the marginal misfit and microleakage of cement-retained implant-supported crown copings . METHODS Single crown structures were constructed with : ( 1 ) laser-sintered Co-Cr ( LS ) ; ( 2 ) vacuum-cast Co-Cr ( CC ) and ( 3 ) vacuum-cast Ni-Cr-Ti ( CN ) . Sample s of each alloy group were r and omly luted in st and ard fashion onto machined titanium abutments using : ( 1 ) GC Fuji PLUS ( FP ) ; ( 2 ) Clearfil Esthetic Cement ( CEC ) ; ( 3 ) RelyX Unicem 2 Automix ( RXU ) and ( 4 ) DentoTemp ( DT ) ( n=15 each ) . After 60 days of water ageing , vertical discrepancy was SEM-measured and cement microleakage was scored using a digital microscope . Misfit data were subjected to two-way ANOVA and Student-Newman-Keuls multiple comparisons tests . Kruskal-Wallis and Dunn 's tests were run for microleakage analysis ( α=0.05 ) . RESULTS Regardless of the cement type , LS sample s exhibited the best fit , whilst CC and CN performed equally well . Despite the framework alloy and manufacturing technique , FP and DT provide comparably better fit and greater microleakage scores than did CEC and RXU , which showed no differences . CONCLUSIONS DMLS of Co-Cr may be a reliable alternative to the casting of base metal alloys to obtain well-fitted implant-supported crowns , although all the groups tested were within the clinical ly acceptable range of vertical discrepancy . No strong correlations were found between misfit and microleakage . Notwithst and ing the framework alloy , definitive resin-modified glass-ionomer ( FP ) and temporary acrylic/urethane-based ( DT ) cements demonstrated comparably better marginal fit and greater microleakage scores than did 10-methacryloxydecyl-dihydrogen phosphate-based ( CEC ) and self-adhesive ( RXU ) dual-cure resin agents\n\n\nTitle 3:\nEffect of in vivo crown margin discrepancies on periodontal health.\nAbstract 3:\nNumerous reports have suggested a relationship between marginal adaptation of dental castings and periodontal tissue health , and this study examined this relationship quantitatively . Forty-two crown restorations in 29 r and omly selected patients were selected for this study using three criteria . ( 1 ) The crowns were placed at the University of North Carolina School of Dentistry ; ( 2 ) the crowns were in service for a minimum of 4 years ; and ( 3 ) the crown margins were within the intracrevicular crevice ( subgingival ) . Replica impressions of the facial margins of specific crowns were made with a vinyl polysiloxane impression material , and poured casts were prepared for scanning electron micrograph evaluation . Marginal discrepancy measurements were identified on each micrograph at 10 equally spaced locations along the margin and averaged for each specimen . Periodontal indices of pocket depths , crevicular fluid volume , and gingival index were accumulated for clinical measurements . Pearson correlation and Bonferroni adjusted probability tests were performed , but no significant correlation was found between marginal discrepancy ( 0.16 + /- 0.13 mm ) and pocket depth ( 2.4 + /- 0.9 mm ) . However , a strong correlation ( p less than 0.001 ) existed between marginal discrepancy and gingival index ( 2 + /- 0.8 ) and between marginal discrepancies and crevicular fluid volume ( 49.9 + /- 31.1 ) . These results established that a significant quantitative relationship existed between the marginal discrepancy and periodontal tissue inflammation for subgingivally located crown margins\n\n\nTitle 4:\nMarginal adaptation of fixed prosthodontics: a new in vitro 360-degree external examination procedure.\nAbstract 4:\nPURPOSE This study evaluated the marginal fit of experimental and custom-made fixed prosthetic restorations through a new 360-degree external examination . The minimum number of gap measurements required to produce relevant results for gap analysis was also investigated . MATERIAL S AND METHODS The marginal fit of six experimental and eight custom-made crowns was observed microscopically by means of a mechanical device , and software was employed to measure the gap . Two crowns , chosen from among the 14 previously evaluated , were reanalyzed to determine the minimum number of gap measurements required to produce significant results for gap analysis . RESULTS The precision obtained with the custom-made crowns differed from that of the experimental specimens . The minimum number of measurements required to produce a sample mean value within + /- 5 microm of the mean , calculated over 360 measurements , taking st and ard error of the means < or = 4 microm , was 18 for experimental and 90 for custom-made crowns , for both equidistant measurement spacing and r and omly selected points . CONCLUSION Differences in fit between experimental specimens and custom-made ones showed that experimental results might not always be obtained in clinical practice . Within the limitations of the protocol of this study , the minimum number of measurements required to ensure relevant results for gap analysis was 18 for experimental and 90 for custom-made crowns\n\n\nTitle 5:\nEvaluation of the marginal precision of one-piece complete arch titanium frameworks fabricated using five different methods for implant-supported restorations.\nAbstract 5:\nOBJECTIVE The aim of the present work was to compare the marginal precision of titanium frameworks for a complete arch-fixed prosthesis fabricated using five different methods . METHODS A prospect i ve study was design ed . Fifteen titanium frameworks for totally edentulous upper or lower jaws , each supported by five to nine implants , were assigned to five study groups , so as to have three frameworks in each group for each technique : ( 1 ) lost wax technique frameworks , ( 2 ) cast titanium sovrastructures laser welded to prefabricated titanium copings , ( 3 ) Procera Implant Bridge , ( 4 ) Cresco Ti System and ( 5 ) CAM StructSURE Precision Milled Bar . The microgap between the framework and the shoulders of implant analogues was measured on the master cast with a stereomicroscope at a magnification of 100 x at four different locations , yielding a total of 364 data points on 91 implants . Data were analyzed using an ANOVA and a Tukey post hoc test ( P=<0.05 ) . RESULTS The mean values for the microgap were 78 microm ( SD+/-48 ) for lost wax technique frameworks , 33 microm ( SD+/-19 ) for cast titanium sovrastructures laser welded to titanium copyings , 21 microm ( SD+/-10 ) for the Procera implant bridge , 18 microm ( SD+/-8 ) for the Cresco Ti System and 27 microm ( SD+/-15 ) for the CAM StructSURE . The differences among the mean values were statistically significant ( P<0.01 or P<0.05 ) . The comparisons among groups 3 , 4 , and 5 and between groups 2 and 5 were not significant ( P>0.05 ) . CONCLUSION The computer-aided procedures analyzed in the present study were able to produce a precision-fitting framework , with no significant differences among them and , at the same time , showed a greater precision compared with the traditional casting methods or with the use of prefabricated titanium copings . However , it should be noted that , even if group 2 frameworks were not as accurate as groups 3 and 4 , cast titanium sovrastructures laser welded to prefabricated titanium copings showed significantly better marginal precision than the frameworks produced with the lost wax technique\n\n\nTitle 6:\nClinical Marginal and Internal Adaptation of Maxillary Anterior Single All-Ceramic Crowns and 2-year Randomized Controlled Clinical Trial.\nAbstract 6:\nPURPOSE The aims of this r and omized-controlled clinical trial were to compare marginal and internal adaptation of all-ceramic crowns fabricated with CAD/CAM and heat-pressed ( HP ) techniques before luting and to evaluate the clinical outcomes at baseline and at 6 , 12 , and 24 months after luting . MATERIAL S AND METHODS Fifteen CAD/CAM ( CC ) and 15 HP all-ceramic crowns were placed in 15 patients . A silicone replica was obtained to measure marginal and internal adaptation of each all-ceramic crown before luting , and they were sectioned buccolingually and mesiodistally . Marginal and internal adaptations were measured using computerized light microscope at 40 × magnification . Clinical evaluations took place at baseline ( 2 days after luting ) and at 6 , 12 , and 24 months after luting . Replica scores were analyzed with Mann-Whitney U and Student 's t-test ( α = 0.05 ) . Survival rate of crowns was determined using Kaplan-Meier statistical analysis . RESULTS The median marginal gap for the CC group was 132.2 μm and was 130.2 μm for the HP group . The mean internal adaptation for the CC group was 220.3 ± 51.3 μm and 210.5 ± 31 μm for the HP group . There were no statistically significant differences with respect to marginal opening ( Mann-Whitney U test ; p = 0.95 ) and internal adaptation ( Student 's t-test ; p = 0.535 ) between the 2 groups . Based on modified Ryge criteria , 100 % of the crowns were rated satisfactory during the 2-year period . CONCLUSION In this in vivo study , CAD/CAM and HP all-ceramic crowns exhibited similar marginal and internal adaptations . A 100 % success rate was recorded for the 15 CAD/CAM and for the 15 HP all-ceramic crowns during the 2-year period\n\n\nTitle 7:\nMarginal fit of castable ceramic crowns.\nAbstract 7:\nThe objective of this study was to measure marginal fit of castable ceramic versus gold crowns . Full veneer gold and ceramic crowns were made on Ivorine dies . Crowns were cemented , embedded , sectioned faciolingually and mesiodistally , and photographed for measurement of absolute marginal discrepancies ( cavosurface angle to casting margin ) to evaluate fit . Results revealed no statistically significant differences in fit among four locations around the margins of either ceramic or gold crowns . There was no statistically significant difference in the combined absolute marginal discrepancy ( fit ) between ceramic and gold crowns . The variance of the combined absolute marginal discrepancy ( fit ) of the ceramic crowns was significantly different than that of the gold crowns ( p = 0.01 level ) . The st and ard deviation ( variability ) of the gold crowns was more than twice that of the ceramic crowns . R and omized block ANOVA demonstrated statistically significant differences among individual gold crown specimens , but none among individual ceramic crown specimens",
"role": "user"
},
{
"content": "Results : Of the 55 included comparative studies , 28 compared CAD‐CAM technology with conventional fabrication techniques , 12 contrasted CAD‐CAM technology and copy milling , 4 compared CAD‐CAM milling with direct metal laser sintering ( DMLS ) , and 22 investigated the performance of a CAD‐CAM system regarding marginal adaptation in restorations/infrastructures produced with different restorative material s. Conclusions : Most of the CAD‐CAM restorations/infrastructures were within the clinical ly acceptable marginal discrepancy ( MD ) range .\nThe performance of a CAD‐CAM system relative to marginal adaptation is influenced by the restorative material .\nCompared with CAD‐CAM , most of the heat‐pressed lithium disilicate crowns displayed equal or smaller MD values .\nSlip‐casting crowns exhibited similar or better marginal accuracy than those fabricated with CAD‐CAM .\nCobalt‐chromium and titanium implant infrastructures produced using a CAD‐CAM system elicited smaller MD values than zirconia .\nThe majority of cobalt‐chromium restorations/infrastructures produced by DMLS displayed better marginal accuracy than those fabricated with the casting technique .\nCompared with copy milling , the majority of zirconia restorations/infrastructures produced by CAD‐CAM milling exhibited better marginal adaptation .\nNo clear conclusions can be drawn about the superiority of CAD‐CAM milling over the casting technique and DMLS regarding marginal adaptation",
"role": "assistant"
}
] |
science.mslr2022_cochrane_multidoc_summarization
|
science.mslr2022_cochrane_multidoc_summarization.2871
|
[
{
"content": "You will be provided with the titles and abstracts of a collection of medical RCT reports studying a clinical question -- for instance, the efficacy of an intervention on some medical outcome. Please write a few sentences synthesizing the findings of these reports, which could serve as a conclusion section for a systematic review article. Your response should not include any other text. The RCT reports are below.\n\n\n\nTitle 1:\nAuthor contacts for retrieval of data for a meta-analysis on exercise and diet restriction.\nAbstract 1:\nThe mode of contact and response levels of authors who had been asked to provide missing or incomplete data for a systematic review on diet and exercise interventions for weight loss was examined.\n We contacted authors by electronic mail, letter, or both. Survival analyses were performed with the Kaplan-Meier method to determine differences in the proportion of responders over time among the different modes of contact and to determine whether response rates differed between authors from the United States and those from other countries. Logistic regression was used to determine whether the number of items requested and publication date influenced the likelihood of response.\n Two hundred forty-one (39.9 percent) studies had missing or incomplete data (e.g., sample size, age, caloric restriction, exercise amount, and so on). We were unable to locate ninety-five authors (39.4 percent). Of the remaining authors, forty-six authors (31.5 percent) responded to information requests. Time to respond differed by contact method (p < .05): e-mail (3 +/- 3 days), letter (27 +/- 30 days), and both (13 +/-12 days). Response rates from U.S. authors did not differ from those of other countries.\n Our study suggests poor success in the acquisition of essential information. Given considerable time and resources, weight loss studies require improved reporting standards to minimize the relatively unsuccessful attempt to contact authors for important and necessary information.",
"role": "user"
},
{
"content": "Those carrying out systematic reviews should continue to contact authors for missing data, recognising that this might not always be successful, particularly for older studies. Contacting authors by e-mail results in the greatest response rate with the fewest number of attempts and the shortest time to respond.",
"role": "assistant"
}
] |
science.ddi_ner
|
science.ddi_ner.568
|
[
{
"content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Brand\", \"UnapprovedDrug\", \"Drug\", \"DrugGroup\"].\n\nPlease return the output as a JSON object of the format: {\"Brand\" : [\"Aspirin\", ...], \"DrugGroup\": [\"Salicyclate\", ...], \"Drug\": [\"Phenobarbital\", ...], \"UnapprovedDrug\": [\"Heroin\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nThe drug interaction studies with valdecoxib were performed both with valdecoxib and a rapidly hydrolyzed intravenous prodrug form. The results from trials using the intravenous prodrug are reported in this section as they relate to the role of valdecoxib in drug interactions. General: In humans, valdecoxib metabolism is predominantly mediated via CYP 3A4 and 2C9 with glucuronidation being a further (20%) route of metabolism. In vitro studies indicate that valdecoxib is a moderate inhibitor of CYP 2C19 (IC50 = 6 g/mL or 19 M) and 2C9 (IC50 = 13 g/mL or 41 M), and a weak inhibitor of CYP 2D6 (IC50 = 31 g/mL or 100 M) and 3A4 (IC50 = 44 g/mL or 141 M ).. Aspirin: Concomitant administration of aspirin with valdecoxib may result in an increased risk of GI ulceration and complications compared to valdecoxib alone. Because of its lack of anti-platelet effect valdecoxib is not a substitute for aspirin for cardiovascular prophylaxis. In a parallel group drug interaction study comparing the intravenous prodrug form of valdecoxib at 40 mg BID (n=10) vs placebo (n=9), valdecoxib had no effect on in vitro aspirin-mediated inhibition of arachidonate- or collagen-stimulated platelet aggregation. Methotrexate: Valdecoxib 10 mg BID did not show a significant effect on the plasma exposure or renal clearance of methotrexate. ACE-inhibitors:Reports suggest that NSAIDs may diminish the antihypertensive effect of ACE-inhibitors. This interaction should be given consideration in patients taking BEXTRA concomitantly with ACE-inhibitors. Furosemide: Clinical studies, as well as post-marketing observations, have shown that NSAIDs can reduce the natriuretic effect of furosemide and thiazides in some patients. This response has been attributed to inhibition of renal prostaglandin synthesis. Anticonvulsants (Phenytoin): Steady state plasma exposure (AUC) of valdecoxib (40 mg BID for 12 days) was decreased by 27% when co-administered with multiple doses (300 mg QD for 12 days) of phenytoin (a CYP 3A4 inducer). Patients already stabilized on valdecoxib should be closely monitored for loss of symptom control with phenytoin coadministration. Valdecoxib did not have a statistically significant effect on the pharmacokinetics of phenytoin (a CYP 2C9 and CYP 2C19 substrate). Drug interaction studies with other anticonvulsants have not been conducted. Routine monitoring should be performed when therapy with BEXTRA is either initiated or discontinued in patients on anticonvulsant therapy. Dextromethorphan: Dextromethorphan is primarily metabolized by CYP 2D6 and to a lesser extent by 3A4. Coadministration with valdecoxib (40 mg BID for 7 days) resulted in a significant increase in dextromethorphan plasma levels suggesting that, at these doses, valdecoxib is a weak inhibitor of 2D6. Even so dextromethorphan plasma concentrations in the presence of high doses of valdecoxib were almost 5-fold lower than those seen in CYP 2D6 poor metabolizers suggesting that dose adjustment is not necessary. Lithium: Valdecoxib 40 mg BID for 7 days produced significant decreases in lithium serum clearance (25%) and renal clearance (30%) with a 34% higher serum exposure compared to lithium alone. Lithium serum concentrations should be monitored closely when initiating or changing therapy with BEXTRA in patients receiving lithium. Lithium carbonate (450 mg BID for 7 days) had no effect on valdecoxib pharmacokinetics. Warfarin: The effect of valdecoxib on the anticoagulant effect of warfarin (1 - 8 mg/day) was studied in healthy subjects by coadministration of BEXTRA 40 mg BID for 7 days. Valdecoxib caused a statistically significant increase in plasma exposures of R-warfarin and S-warfarin (12% and 15%, respectively), and in the pharmacodynamic effects (prothrombin time, measured as INR) of warfarin. While mean INR values were only slightly increased with coadministration of valdecoxib, the day-to-day variability in individual INR values was increased. Anticoagulant therapy should be monitored, particularly during the first few weeks, after initiating therapy with BEXTRA in patients receiving warfarin or similar agents. Fluconazole and Ketoconazole: Ketoconazole and fluconazole are predominantly CYP 3A4 and 2C9 inhibitors, respectively. Concomitant single dose administration of valdecoxib 20 mg with multiple doses of ketoconazole and fluconazole produced a significant increase in exposure of valdecoxib. Plasma exposure (AUC) to valdecoxib was increased 62% when coadministered with fluconazole and 38% when coadministered with ketoconazole. Glyburide: Glyburide is a CYP 2C9 substrate. Coadministration of valdecoxib (10 mg BID for 7 days) with glyburide (5 mg QD or 10 mg BID) did not affect the pharmacokinetics (exposure) of glyburide. Coadministration of valdecoxib (40 mg BID (day 1) and 40 mg QD (days 2-7)) with glyburide (5 mg QD) did not affect either the pharmacokinetics (exposure) or the pharmacodynamics (blood glucose and insulin levels) of glyburide. Coadministration of valdecoxib (40 mg BID (day 1) and 40 mg QD (days 2-7)) with glyburide (10 mg glyburide BID) resulted in 21% increase in glyburide AUC0-12 and a 16% increase in glyburide Cmax leading to a 16% decrease in glucose AUC0-24. Insulin parameters were not affected. Because changes in glucose concentrations with valdecoxib coadministration were within the normal variability and individual glucose concentrations were above or near 70 mg/dL, dose adjustment for glyburide (5 mg QD and 10 mg BID) with valdecoxib coadministration (up to 40 mg QD) is not indicated. Coadministration of glyburide with doses higher than 40 mg valdecoxib (e.g., 40 mg BID) have not been studied. Omeprazole: Omeprazole is a CYP 3A4 substrate and CYP 2C19 substrate and inhibitor. Valdecoxib steady state plasma concentrations (40 mg BID) were not affected significantly with multiple doses of omeprazole (40 mg QD). Coadministration with valdecoxib increased exposure of omeprazole (AUC) by 46%. Drugs whose absorption is sensitive to pH may be negatively impacted by concomitant administration of omeprazole and valdecoxib. However, because higher doses (up to 360 mg QD) of omeprazole are tolerated in Zollinger-Ellison (ZE) patients, no dose adjustment for omeprazole is recommended at current doses. Coadministration of valdecoxib with doses higher than 40 mg QD omeprazole has not been studied. Oral Contraceptives: Valdecoxib (40 mg BID) did not induce the metabolism of the combination oral contraceptive norethindrone/ethinyl estradiol (1 mg /35 mcg combination, Ortho-Novum 1/35 ). Coadministration of valdecoxib and Ortho-Novum 1/35 increased the exposure of norethindrone and ethinyl estradiol by 20% and 34%, respectively. Although there is little risk for loss of contraceptive efficacy, the clinical significance of these increased exposures in terms of safety is not known. These increased exposures of norethindrone and ethinyl estradiol should be taken into consideration when selecting an oral contraceptive for women taking valdecoxib. Diazepam: Diazepam (Valium) is a CYP 3A4 and CYP 2C19 substrate. Plasma exposure of diazepam (10 mg BID) was increased by 28% following administration of valdecoxib (40 mg BID) for 12 days, while plasma exposure of valdecoxib (40 mg BID) was not substantially increased following administration of diazepam (10 mg BID) for 12 days. Although the magnitude of changes in diazepam plasma exposure when coadministered with valdecoxib were not sufficient to warrant dosage adjustments, patients may experience enhanced sedative side effects caused by increased exposure of diazepam under this circumstance. Patients should be cautioned against engaging in hazardous activities requiring complete mental alertness such as operating machinery or driving a motor vehicle.",
"role": "user"
},
{
"content": "{\"Brand\": [\"Valium\", \"aspirin\", \"BEXTRA\", \"Aspirin\", \"Anticoagulant\", \"Ortho-Novum\"], \"Drug\": [\"Warfarin\", \"norethindrone\", \"Diazepam\", \"valdecoxib\", \"fluconazole\", \"ethinyl estradiol\", \"Dextromethorphan\", \"ketoconazole\", \"methotrexate\", \"R-warfarin\", \"Lithium carbonate\", \"omeprazole\", \"Fluconazole\", \"phenytoin\", \"Glyburide\", \"furosemide\", \"Furosemide\", \"glyburide\", \"warfarin\", \"Valdecoxib 10 mg\", \"lithium\", \"S-warfarin\", \"Ketoconazole\", \"Omeprazole\", \"Methotrexate\", \"Valdecoxib\", \"dextromethorphan\", \"Phenytoin\", \"diazepam\", \"Lithium\"], \"DrugGroup\": [\"thiazides\", \"Contraceptives\", \"contraceptive\", \"anticonvulsants\", \"NSAIDs\", \"ACE-inhibitors\", \"Anticonvulsants\"], \"UnapprovedDrug\": []}",
"role": "assistant"
}
] |
science.data_reco_mcq_sc
|
science.data_reco_mcq_sc.4869
|
[
{
"content": "Given a short query with keyphrases about a research idea in 2019 in the domain of artificial intelligence, recommend the relevant datasets to validate the research idea from options provided. A short description of the datasets is also provided to help you.\n\nQuery: A simple yet effective method for accurate crowd counting and precise person localization.\n\nKeyphrases: crowd counting images\n\nDataset description:\nIMDB-BINARY: IMDB-BINARY is a movie collaboration dataset that consists of the ego-networks of 1,000 actors/actresses who played roles in movies in IMDB. In each graph, nodes represent actors/actress, and there is an edge between them if they appear in the same movie.\nMinneApple: MinneApple is a benchmark dataset for apple detection and segmentation. The fruits are labelled using polygonal masks for each object instance to aid in precise object detection, localization, and segmentation.\nREDDIT-BINARY: REDDIT-BINARY consists of graphs corresponding to online discussions on Reddit. In each graph, nodes represent users, and there is an edge between them if at least one of them respond to the other’s comment.\nShanghaiTech: The Shanghaitech dataset is a large-scale crowd counting dataset. It consists of 1198 annotated crowd images.\n\nOptions:- A: IMDB-BINARY B: MinneApple C: REDDIT-BINARY D: ShanghaiTech",
"role": "user"
},
{
"content": "ShanghaiTech",
"role": "assistant"
}
] |
science.mltables_te
|
science.mltables_te.6
|
[
{
"content": "\\section{Introduction}\nLarge datasets have made the phenomenal performance of Transformer-based \\citep{attention-is-all-you-need} pre-trained language models (PLMs) possible \\citep{devlin-etal-2019-bert, sanh2019distilbert, liu-2019-roberta, clark2020electra, bommasani2021fm}. \nHowever, recent studies show that a significant fraction of examples in a training dataset can be omitted without sacrificing test accuracy. \nTo this end, many metrics have been introduced for ranking the examples in a dataset based on their importance. \nOne of these metrics is \\emph{Forgetting Score} \\citep{toneva2018empirical-forgetting, yaghoobzadeh-etal-2021-increasing} which recognizes the examples that are misclassified after being correctly classified during training or are always misclassifed. \n\\emph{Datamap} \\citep{swayamdipta-2020-dataset-cartography} is another technique for diagnosing datasets which uses \\emph{confidence} and \\emph{variability} metrics. \n\n\\section{Background}\nIn this section, we describe the two metrics introduced by \\citet{NEURIPS2021_ac56f8fe_datadiet} for pruning the training data: \\textbf{GraNd} and its estimated variant, \\textbf{EL2N}. \n\\subsection{GraNd}\nConsider $\\mathbf{X} = \\{x_i, y_i\\}_{i=1}^N$ to be a training dataset for a given classification task with $K$ classes, where $x_i$ is the input (i.e. a sequence of tokens) and $y_i$ is its corresponding label. \nTo estimate the importance of each training sample $(x_i, y_i)$, \\citet{NEURIPS2021_ac56f8fe_datadiet} propose utilizing the expected value of the loss gradient norm denoted as GraNd:\n\n\\subsection{EL2N}\nBy defining $\\psi^{(k)}(x_i) = \\nabla_{\\boldsymbol{w}}f^{(k)}(x_i)$ for the $k^\\text{th}$ logit,\nthe loss gradient ($\\boldsymbol{g}$) can be written as:\n\nSince the $\\mathcal{L}(f(x_i), y_i)$ is a cross entropy loss, $\\nabla_{f^{(k)}}\\mathcal{L}(f(x_i), y_i)^T=p(x_i)^{(k)} - y^{(k)}_i$ , where $p(x_i)$ is the output probability vector of the model. By assuming orthogonality and uniform sizes across logits for $\\psi$ over the training examples, the norm in Eq. \\ref{eq:grand} is approximately equal to $\\left\\|p(x_i) - \\boldsymbol{y}_i\\right\\|_2$ ($\\boldsymbol{y}_i$ is the one-hot vector of the true label). As a result, an estimate of GraND is \\textbf{EL2N}, which is defined as follows:\n\n\\begin{equation}\n\\label{eq:el2n}\n \\text{EL2N}(x_i, y_i) = \\mathbb{E}_{\\boldsymbol{w}}\\left\\|p(x_i) - \\boldsymbol{y}_i\\right\\|_2\n\\end{equation}\n\n\\section{Experiments}\nIn this section, we verify the effectiveness of GraNd and EL2N \\citep{NEURIPS2021_ac56f8fe_datadiet} in the NLP domain. \nOur experimental setup is similar to \\citet{NEURIPS2021_ac56f8fe_datadiet} for the followings:\n(1) models are trained on the subsets of the data with higher GraNd and EL2N scores;\n(2) based on the expectation over multiple weight initializations mentioned in Eq.~\\ref{eq:grand} and Eq.~\\ref{eq:el2n}, we average the scores over five independent training runs.\\footnote{It is ten independent training runs in \\citet{NEURIPS2021_ac56f8fe_datadiet}}; and,\n(3) for putting our results in context, we utilized random pruning as the baseline\\footnote{Random selection with the same proportion} and the accuracy of training on the whole dataset with no pruning as the target performance.\n\nTwo major differences between these two setups are mentioned here:\n(1) we used a pre-trained model, i.e., BERT \\citep{devlin-etal-2019-bert}, standard in the NLP domain, whereas \\citet{NEURIPS2021_ac56f8fe_datadiet} uses vision models initialized with random weights; and\n(2) as fine-tuning requires few epochs of training, we computed the metrics over fine-grained steps rather than epochs.\n\n\\section{Appendix}\n\\subsection{Dataset samples and their scores}\nExamples with the highest and lowest EL2N scores are provided in Table~\\ref{tab:examples-agnews} and Table~\\ref{tab:examples-mnli} for AG News and MNLI datasets. \n\n\\begin{table*}[h]\n\\centering\n\\small\n\\begin{tabular}{lp{7.2cm}ccc}\n\\toprule \n & \\textbf{Instance} & \\textbf{EL2N} & \\textbf{Gold Label} & \\textbf{Prediction}\\\\\n\\midrule \n\\multirow{15}{*}{\\rotatebox{90}{Noise / Hard}}\n& Tom Hauck/Getty Images 1. Rapper Snoop Dogg attended practice Tuesday with No. 1 USC as a guest wide receiver. The Trojans were delighted by the star \\#39;s presence, as were a group of pee-wee football players watching practice that day.\n& 1.413128\n& Business (2)\n& Sports (1)\n\\\\\\cmidrule[0.01em](lr){2-5}\n& Palestinian gunmen kidnap CNN producer GAZA CITY, Gaza Strip -- Palestinian gunmen abducted a CNN producer in Gaza City on Monday, the network said. The network said Riyadh Ali was taken away at gunpoint from a CNN van.\n& 1.412653\n& Sports (1)\n& World (0)\n\\\\\\cmidrule[0.01em](lr){2-5}\n& Names in the Game Dressed in jeans and a white shirt, the men \\#39;s all-around champion in Athens appeared relaxed as he helped promote a 14-city gymnastics exhibition tour that kicks off at the Mohegan Sun Casino on Tuesday.\n& 1.412590\n& Business (2)\n& Sports (1)\n\\\\\\midrule[0.03em]\n\\multirow{14}{*}{\\rotatebox{90}{Easy}}\n& Vikings \\#39; Moss practices despite hamstring injury Eden Prairie, MN (Sports Network) - Minnesota Vikings wide receiver Randy Moss practiced on Wednesday despite nursing a strained right hamstring and is listed as questionable for this Sunday \\#39;s game against the New York Giants.\n& 0.000665\n& Sports (1)\n& Sports (1)\n\\\\\\cmidrule[0.01em](lr){2-5}\n& Cassell a no-show; Wolves sign Griffin Minneapolis, MN (Sports Network) - Minnesota Timberwolves guard Sam Cassell did not show up Tuesday for the first day of training camp.\n& 0.000653\n& Sports (1)\n& Sports (1)\n\\\\\\cmidrule[0.01em](lr){2-5}\n& Rockets activate Lue from injured list Houston, TX (Sports Network) - The Houston Rockets activated guard Tyronn Lue from the injured list prior to Wednesday \\#39;s game against the Hawks.\t\n& 0.000649\n& Sports (1)\n& Sports (1)\n \\\\\n\\bottomrule\n\\end{tabular}\n\\caption{\nExamples from AG News belonging to different score regions of EL2N. The highest scoring instances are mostly noisy samples, while the least scoring instances are very easy to learn.\n} \n\\label{tab:examples-agnews}\n\\end{table*}\n\nYou are provided with a table and supplementary text from a machine learning paper. Here are JSON templates for four types of numeric cells: \"Other\", \"Data Stat.\", \"Hyper-parameter/Architecture\", and \"Result\":\n{\"value\": \"xx\", \"type\": \"Other\"}\n{\"value\": \"xx\", \"type\": \"Data Stat.\", \"dataset\": \"xx\", \"attribute name\": \"xx\", \"sub-set/group name\": \"xx\", \"dataset features\": {\"xx\": \"yy\"}}\n{\"value\": \"xx\", \"type\": \"Hyper-parameter/Architecture\", \"model\": \"xx\", \"parameter/architecture name\": \"xx\", \"dataset\": \"xx\"}\n{\"value\": \"xx\", \"type\": \"Result\", \"task\": \"xx\", \"metric\": \"xx\", \"training data/set\": \"xx\", \"test data/set\": \"xx\", \"model\": \"xx\", \"model settings\": {\"xx\": \"yy\"}, \"experimental settings\": {\"xx\": \"yy\"}}\n\nPlease describe all numeric cells in the above latex table following the JSON templates (proceeding by row in a left-right, top-down direction). For each cell, output one JSON description per line. For any unanswerable attributes in the templates, set their value to the placeholder \"xx\" if it is of string type and {\"xx\": \"yy\"} if it is of dictionary type. Numeric cells, which describe performance/error analysis, should be labeled as \"Other\".\n\nCell Description:",
"role": "user"
},
{
"content": "{\"value\": \"1.413128\", \"type\": \"Other\"}\n{\"value\": \"1.412653\", \"type\": \"Other\"}\n{\"value\": \"1.412590\", \"type\": \"Other\"}\n{\"value\": \"0.000665\", \"type\": \"Other\"}\n{\"value\": \"0.000653\", \"type\": \"Other\"}\n{\"value\": \"0.000649\", \"type\": \"Other\"}",
"role": "assistant"
}
] |
science.ddi_ner
|
science.ddi_ner.65
|
[
{
"content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Brand\", \"UnapprovedDrug\", \"Drug\", \"DrugGroup\"].\n\nPlease return the output as a JSON object of the format: {\"Brand\" : [\"Aspirin\", ...], \"DrugGroup\": [\"Salicyclate\", ...], \"Drug\": [\"Phenobarbital\", ...], \"UnapprovedDrug\": [\"Heroin\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nUricosuric Agents: Aspirin may decrease the effects of probenecid, sulfinpyrazone, and phenylbutazone. Alcohol: Has a synergistic effect with aspirin in causing gastrointestinal bleeding. Corticosteroids: Concomitant administration with aspirin may increase the risk of gastrointestinal ulceration and may reduce serum salicylate levels. Pyrazolone Derivatives (phenylbutazone, oxyphenbutazone, and possibly dipyrone): Concomitant administration with aspirin may increase the risk of gastrointestinal ulceration. Nonsteroidal Antiinflammatory Agents: Aspirin is contraindicated in patients who are hypersensitive to nonsteroidal anti-inflammatory agents. Urinary Alkalinizers: Decrease aspirin effectiveness by increasing the rate of salicylate renal excretion. Phenobarbital: Decreases aspirin effectiveness by enzyme induction. Phenytoin: Serum phenytoin levels may be increased by aspirin. Propranolol: May decrease aspirins anti-inflammatory action by competing for the same receptors. Antacids: Enteric Coated Aspirin should not be given concurrently with antacids, since an increase in the pH of the stomach may effect the enteric coating of the tablets.",
"role": "user"
},
{
"content": "{\"Brand\": [\"aspirins\", \"aspirin\", \"Aspirin\"], \"Drug\": [\"phenytoin\", \"Alcohol\", \"oxyphenbutazone\", \"Phenobarbital\", \"dipyrone\", \"Phenytoin\", \"Propranolol\", \"phenylbutazone\", \"Nonsteroidal Antiinflammatory\", \"aspirin\", \"Aspirin\", \"nonsteroidal anti-inflammatory\", \"probenecid\", \"sulfinpyrazone\"], \"DrugGroup\": [\"Corticosteroids\", \"Antacids\", \"Uricosuric Agents\", \"antacids\", \"salicylate\", \"Pyrazolone Derivatives\"], \"UnapprovedDrug\": []}",
"role": "assistant"
}
] |
science.mslr2022_cochrane_multidoc_summarization
|
science.mslr2022_cochrane_multidoc_summarization.1051
|
[
{
"content": "You will be provided with the titles and abstracts of a collection of medical RCT reports studying a clinical question -- for instance, the efficacy of an intervention on some medical outcome. Please write a few sentences synthesizing the findings of these reports, which could serve as a conclusion section for a systematic review article. Your response should not include any other text. The RCT reports are below.\n\n\n\nTitle 1:\nRandomized efficacy trial of early preconception counseling for diabetic teens (READY-girls).\nAbstract 1:\nTo develop and assess the feasibility of an early preconception counseling program for adolescents called READY-Girls (Reproductive-health Education and Awareness of Diabetes in Youth for Girls).\n A total of 53 adolescent females with type 1 diabetes between 16 and 19.9 years of age were randomized into groups receiving a CD-ROM, a book, or standard care (control) and given one comprehensive session. Outcomes were assessed at baseline, immediately after, and at 3 months.\n Teens who received the CD and those who received the book demonstrated significant (P < or = 0.05) sustained improvement (over 3 months) in knowledge, perceived benefits of both receiving preconception counseling and using effective family planning, and perceived more support with reproductive health issues.\n Clinical feasibility of the program was demonstrated. Both the CD and the book appeared to be efficacious formats for the short term. Future studies should examine repeated boosters of a CD and a book, which are not meant to replace but rather to reinforce and supplement health professional education.",
"role": "user"
},
{
"content": "Little evidence is available to recommend for or against preconception care for women with pre-existing diabetes. Further large, high-quality randomised controlled trials are needed to evaluate the effect of different protocols of preconception care for women with pre-existing diabetes.",
"role": "assistant"
}
] |
science.qasa_abstractive_qa
|
science.qasa_abstractive_qa.302
|
[
{
"content": "You will be shown an excerpt from a computer science scientific research paper, followed by a question about the paper. Please answer the question. Do not include any text in your response other than the answer.\n\nContext: We examine different attention models (global, local-m, local-p) and differentalignment functions (location, dot, general, concat) as described inSection 3. Due to limitedresources, we cannot run all the possible combinations.However, results in Table 4 do give us some idea aboutdifferent choices.The location-based function does not learn goodalignments: the global (location) model can only obtain a smallgain when performing unknown word replacement compared to using other alignmentfunctions.141414There is a subtle difference in how we retrieve alignmentsfor the different alignment functions. At time step t in which we receivey_{t-1} as input and then compute \\mbox{\\boldmath{$h$}}_{t},\\mbox{\\boldmath{$a$}}_{t},\\mbox{\\boldmath{$c$}}_{t}, and \\mbox{\\boldmath{$\\tilde{h}$}}_{t} beforepredicting y_{t}, the alignment vector \\mbox{\\boldmath{$a$}}_{t} is used as alignmentweights for (a) the predicted word y_{t} in the location-basedalignment functions and (b) the input word y_{t-1} in the content-basedfunctions.For content-based functions, our implementation concat does not yield good performancesand more analysis should be done to understand thereason.151515With concat, the perplexities achieved by different models are 6.7 (global), 7.1(local-m), and 7.1 (local-p). Such high perplexities could be due to the factthat we simplify the matrix W_{a} to set the part that corresponds to \\mbox{\\boldmath{$\\bar{h}$}}_{s}to identity. It is interesting to observe that dot workswell for the global attention and general is better for the localattention.Among the different models, the local attention model with predictive alignments (local-p) is best, both in terms of perplexities and BLEU.\n\nQuestion: What kinds of alignment functions are used for their attention-based models?",
"role": "user"
},
{
"content": "Authors used location, dot, general and concat alignment functions in their experiments.",
"role": "assistant"
}
] |
science.bioasq_yesno_qa
|
science.bioasq_yesno_qa.704
|
[
{
"content": "You have to answer a biomedical question in binary format, i.e. only yes and no are the only acceptable answer formats. A list of paragraphs is provided as context to help you answer the question.\n\nQuestion: Can CD55 deficiency cause thrombosis?\n\nContext: The loss of CD55 and CD59 renders PNH erythrocytes susceptible to intravascular haemolysis, which can lead to thrombosis and to much of the morbidity and mortality of PNH.\nCD55 Deficiency, Early-Onset Protein-Losing Enteropathy, and Thrombosis.\nCONCLUSIONS: CD55 deficiency with hyperactivation of complement, angiopathic thrombosis, and protein-losing enteropathy (the CHAPLE syndrome) is caused by abnormal complement activation due to biallelic loss-of-function mutations in CD55.\nIt is caused by the expansion of a hematopoietic progenitor cell that has acquired a mutation in the X-linked phosphatidylinositol glycan class A (PIGA) gene that results in deficiency of the glycosylphosphatidylinositol anchor structure responsible for fixing a wide spectrum of proteins particularly CD55 and CD59. The clinical features of this disease arise as a result of complement-mediated hemolysis in unprotected red cells, leukocytes, and platelets as well as the release of free hemoglobin. Patients may present with a variety of clinical manifestations, such as anemia, thrombosis, kidney disease, smooth muscle dystonias, abdominal pain, dyspnea, and extreme fatigue.\nThe lack of one of the GPI-AP complement regulatory proteins (CD55, CD59) leads to hemolysis. The disease is diagnosed with hemolytic anemia, marrow failure and thrombosis.\nParoxysmal nocturnal hemoglobinuria (PNH) is a rare bone marrow failure disorder that manifests with hemolytic anemia, thrombosis, and peripheral blood cytopenias. The absence of two glycosylphosphatidylinositol (GPI)-anchored proteins, CD55 and CD59, leads to uncontrolled complement activation that accounts for hemolysis and other PNH manifestations.\nRESULTS: CD55 and/or CD59 deficiencies were found in 1.6% (2/127) of patients with primary BCS, 1.0% (1/100) of non-malignant and non-cirrhotic patients with PVT, and 4.7% (4/85) of cirrhotic patients with PVT.\nData of this study indicate that the PNH defect as detected with CD55, CD59, and CD16 is not an important cause of intra-abdominal thrombosis in northwestern India.\nPNH testing of red blood cells revealed a CD55 and CD59 deficiency consistent with PNH in both cases. The systemic complications typically associated with thrombosis were not observed for the following several months with early conservative treatments including eculizumab.\nDeficiency of the GPI-anchored complement inhibitors CD55 and CD59 on erythrocytes leads to intravascular hemolysis upon complement activation. Apart from hemolysis, another prominent feature is a highly increased risk of thrombosis.\nGenetic reconstitution of CD55 or treatment with a complement-inhibitory therapeutic antibody reversed abnormal complement activation.<br><b>CONCLUSIONS</b>: CD55 deficiency with hyperactivation of complement, angiopathic thrombosis, and protein-losing enteropathy (the CHAPLE syndrome) is caused by abnormal complement activation due to biallelic loss-of-function mutations in CD55.\nParoxysmal nocturnal hemoglobinuria (PNH) is an acquired clonal disorder characterized by a decrease or absence of glycosylphosphatidylinositol (GPI)-anchored molecules such as CD55 and CD59 from the surface of affected cells, resulting in intravascular hemolysis, cytopenia, and venous thrombosis.\nCONCLUSIONS CD55 deficiency with hyperactivation of complement, angiopathic thrombosis, and protein-losing enteropathy (the CHAPLE syndrome) is caused by abnormal complement activation due to biallelic loss-of-function mutations in CD55.\nCD55 deficiency with hyperactivation of complement, angiopathic thrombosis, and protein-losing enteropathy (the CHAPLE syndrome) is caused by abnormal complement activation due to biallelic loss-of-function mutations in CD55.\nCD55 Deficiency, Early-Onset Protein-Losing Enteropathy, and Thrombosis.",
"role": "user"
},
{
"content": "yes",
"role": "assistant"
}
] |
science.mslr2022_cochrane_multidoc_summarization
|
science.mslr2022_cochrane_multidoc_summarization.2992
|
[
{
"content": "You will be provided with the titles and abstracts of a collection of medical RCT reports studying a clinical question -- for instance, the efficacy of an intervention on some medical outcome. Please write a few sentences synthesizing the findings of these reports, which could serve as a conclusion section for a systematic review article. Your response should not include any other text. The RCT reports are below.\n\n\n\nTitle 1:\n[Intra-umbilical administration of oxytocin in the treatment of retained placenta].\nAbstract 1:\nnan\n\n\nTitle 2:\nThe effect of oxytocin injection into the umbilical vein for the management of the retained placenta.\nAbstract 2:\nIn a single-blind study 51 patients with retention of the placenta were randomized into one of three groups: Group 1 was given 10 IU of oxytocin in 10 ml of sodium chloride into the umbilical vein; group 2 was given 10 ml of sodium chloride; group 3 was treated with manual removal of the placenta. No significant differences were recorded in groups 1 and 2, and no advantages were found in comparison with the procedure normally used.\n\n\nTitle 3:\nAvoiding manual removal of placenta: evaluation of intra-umbilical injection of uterotonics using the Pipingas technique for management of adherent placenta.\nAbstract 3:\nManual removal of placenta is performed in 1-3% of cases, and whilst a well established and relatively safe procedure, it is not without complications, which include infection, hemorrhage, uterine rupture, and occasional maternal death.\n A three-arm randomized controlled trial of 50 IU Syntocinon (in 30 ml N saline) versus 800 mcg misoprostol (in 30 ml N saline) versus 30 ml N saline alone (control), injected into the placental bed via the umbilical vein using the Pipingas method. A group sequential research model (triangular test: PEST4) was adopted to minimize the sample size, as retained placenta is a relatively uncommon condition.\n No significant difference in the rate of manual removal was observed between the control and Syntocinon groups. On triggering the automatic stopping rule for this arm of the trial all subsequent cases recruited were allocated to receive either Syntocinon or misoprostol. After a total of 54 cases a significant reduction in manual removal of placenta was observed in the misoprostol group, triggering the automatic stopping rule and terminating the trial.\n Misoprostol (800 mcg) dissolved in 30 ml N saline and administered by intraumbilical injection using the Pipingas technique significantly reduces the need for manual removal for retained adherent placenta, whereas Syntocinon has similar effectiveness to injection of N saline alone.\n\n\nTitle 4:\nIntraumbilical oxytocin for the management of retained placenta: a randomized controlled trial.\nAbstract 4:\nTo evaluate the ability of intraumbilical oxytocin injection as a treatment for retained placenta after vaginal delivery to reduce the incidence of manual removal and postpartum hemorrhage.\n A randomized controlled trial was set up in a university and a district general hospital. We recruited 81 women with singleton pregnancies who underwent vaginal delivery and who failed to deliver the placenta after 20 minutes of active management of the third stage of labor. Study subjects were randomized to receive either 1) an intraumbilical injection of oxytocin (20 IU in 20 mL of saline); 2) an intraumbilical injection of saline (20 mL); or 3) no treatment. Outcome measures were expulsion of the placenta within 45 minutes of delivery, need for manual removal of the placenta under anesthesia, and postpartum hemorrhage (defined as a blood loss greater than 500 mL).\n Women given an intraumbilical injection of oxytocin had a significant increase in spontaneous expulsion of the placenta within 45 minutes of delivery and fewer manual removals of the placenta, compared with women without treatment (odds ratio [OR] 11.6, 99% confidence interval [CI] 1.4, 272.8; and OR 7.4, 99% CI 1.1, 86.5; respectively). When women given intraumbilical oxytocin were compared with women given only intraumbilical saline, the difference was not statistically significant (OR 6.6, 99% CI 0.9, 77.2 for spontaneous expulsion of the placenta; and OR 4.7, 99% CI 0.8, 39.5 for manual removal). There was no significant difference in the incidence of spontaneous expulsion and manual removal of the placenta between women given intraumbilical saline injection and women without treatment (OR 1.8, 99% CI 0.1, 53.9; and OR 1.6, 99% CI 0.1, 22.4; respectively).\n The results of our study suggest a clinically important beneficial effect of intraumbilical oxytocin injection in the management of retained placenta.\n\n\nTitle 5:\nIntraumbilical vein injection of prostaglandin F2 alpha in retained placenta.\nAbstract 5:\nA randomized protocol was used to study the effect of intraumbilical prostaglandin F2 alpha (Hembate, Upjohn) and oxytocin injection in women with retained placenta. Prostaglandin F2 alpha, 20 mg, diluted to 20 ml in normal saline solution (10 women, group 1), 30 IU of oxytocin, diluted to 20 ml in normal saline solution (11 women, group 2), or 20 ml of normal saline solution alone (7 women, group 3), were injected into the umbilical vein 1 h after delivery. Nine women (group 4, controls) underwent manual removal of the retained placenta. In group 1, placental expulsion occurred in all patients and the duration of the placental expulsion after prostaglandin F2 alpha injection was 6.8 +/- 1.36 (mean +/- SE) min: in group 2, six placental expulsions occurred after 13.3 +/- 1.97 min (mean +/- SE); and in group 3, no effect was recorded after intraumbilical saline injection. We suggest that intraumbilical vein injection of prostaglandin F2 alpha might be a beneficial, non-surgical method for treating retained placenta. Oxytocin might reduce the incidence of manual lysis of the placenta and achieve partial success.\n\n\nTitle 6:\nIntra-umbilical vein injection and retained placenta: evidence from a collaborative large randomised controlled trial. Grupo Argentino de Estudio de Placenta Retenida.\nAbstract 6:\nTo determine whether intra-umbilical vein injection with saline solution, with or without oxytocin, reduces the need for manual removal of placenta compared with expectant management.\n Multicenter, randomised controlled trial.\n Eleven hospitals in four cities of Argentina: Buenos Aires, Corrientes, Rosario, and Salta.\n Two hundred and ninety-one women showing no evidence of placental separation thirty minutes after vaginal delivery.\n Three different management strategies: 1. intra-umbilical vein injection of saline solution plus oxytocin; 2. intra-umbilical vein injection of saline solution alone; and 3. expectant management.\n Primary: manual removal of the placenta. Secondary: blood loss after trial entry, haemoglobin level at 24 to 48 hours and at 40 to 45 days after delivery, blood transfusion, curettage, infection, and days of hospital stay.\n Rates of subsequent manual removal were similar: intra-umbilical vein injection of saline solution plus oxytocin (58%; RR 0.92; 95% CI 0.73-1.15), or saline alone (63%; RR 1.00; 95% CI 0.80-1.24), compared with expectant management (63%). There were also no detectable effects of the active managements on any of the secondary measures of outcome.\n Based on evidence available from randomised controlled trials, including this trial, it is unlikely that intra-umbilical injection with or without oxytocin, is clinically useful. We recommend that this intervention should not be used in third stage management of labour.\n\n\nTitle 7:\nIs there a place for intra-umbilical oxytocin for the management of retained placenta?\nAbstract 7:\nIntra-umbilical injection of oxytocin has been used to hasten placental separation in retained placenta. A randomised controlled trial was done on 35 consequent women who fulfilled the criteria for retained placenta at the Department of Obstetrics & Gynaecology Ipoh Hospital. Nineteen patients who were recruited into the study group received intraumbilical injection of 301U oxytocin in 27mls saline. Another 16 patients who were in the control group received 30mls of 0.9% sodium chloride (placebo). The primary outcome measured was the need for manual removal of placenta (MRP). Nine out of the 19 patients in the oxytocin group required MRP while 10/16 in the control group required MRP. There was a 24% reduction (95% C.I. 0.41 to 1.39) in the need for MRP in the study group compared to the saline group. our results indicate that intra-umbilical vein injection of oxytocin is not clinically useful for the removal of a retained placenta.\n\n\nTitle 8:\nRemoving a retained placenta by oxytocin--a controlled study.\nAbstract 8:\nNo increase in maternal plasma oxytocin concentration was detected after administration of 100 IU oxytocin into the umbilical veins of seven women immediately after delivery. The delivery of the placenta was accelerated after umbilical vein injection of 100 IU oxytocin in a placebo-controlled study of 40 women: 12 minutes (4 to 40) in the oxytocin group versus 40 minutes (29 to 40) in the placebo group (median and interquartile ranges), p less than 0.05.\n\n\nTitle 9:\nIntraumbilical oxytocin for management of retained placenta.\nAbstract 9:\nThe purpose of the study was to investigate the removal of a retained placenta.\n Oxytocin was injected into the vein of the umbilical cord. A total of 109 patients with retention of the placenta were randomized into two groups. Active management of the third stage of labor was carried out by giving oxytocin 5 IU intravenously and ergometrine maleate 0.2 mg intramuscularly after delivery of the fetus. Group 1, which comprised 68 patients, was allocated to receive 50 IU oxytocin diluted in 10 ml 0.9% sodium chloride solution, and the 41 patients in group 2 were given 20 ml plasma expander (dextran 70) into the umbilical vein.\n Forty-nine cases (72%) in the oxytocin group and 22 cases (54%) in the dextran 70 group required manual removal of the retained placenta. No significant differences were found between group 1 (oxytocin) and group 2 (dextran 70).\n Our results indicate that intraumbilical vein injection of oxytocin is not effective for removal of a retained placenta.\n\n\nTitle 10:\nUmbilical vein administration of oxytocin for the management of retained placenta: is it effective?\nAbstract 10:\nIn a multicenter randomized controlled trial involving 220 women with retained placenta no beneficial effects could be established of intraumbilical vein administration of 10 IU of oxytocin in 20 ml of saline solution. A reduction was not gained in the rate of manual removal of the placenta and there was no decrease in the amount of blood loss. Oxytocin only induced a minor shortening of the median time interval from administration to the spontaneous expulsion of the placenta as compared with a placebo injection. Maternal serum alpha-fetoprotein levels before and after intraumbilical vein injection did not show evidence of fetomaternal transfusion.",
"role": "user"
},
{
"content": "UVI of oxytocin solution is an inexpensive and simple intervention that could be performed while placental delivery is awaited. However, high-quality randomized trials show that the use of oxytocin has little or no effect. Further research into the optimal timing of manual removal and into UVI of prostaglandins or plasma expander is warranted.",
"role": "assistant"
}
] |
science.scitldr_aic
|
science.scitldr_aic.952
|
[
{
"content": "Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.\n\nPaper text:\n\nDuring the last years, a remarkable breakthrough has been made in AI domain\n\nthanks to artificial deep neural networks that achieved a great success in many\n\nmachine learning tasks in computer vision, natural language processing, speech\n\nrecognition, malware detection and so on.\nHowever, they are highly vulnerable\n\nto easily crafted adversarial examples.\nMany investigations have pointed out this\n\nfact and different approaches have been proposed to generate attacks while adding\n\na limited perturbation to the original data.\nThe most robust known method so far\n\nis the so called C&W attack [1].\nNonetheless, a countermeasure known as fea-\n\nture squeezing coupled with ensemble defense showed that most of these attacks\n\ncan be destroyed [6].\nIn this paper, we present a new method we call Centered\n\nInitial Attack (CIA) whose advantage is twofold : first, it insures by construc-\n\ntion the maximum perturbation to be smaller than a threshold fixed beforehand,\n\nwithout the clipping process that degrades the quality of attacks.\nSecond, it is\n\nrobust against recently introduced defenses such as feature squeezing, JPEG en-\n\ncoding and even against a voting ensemble of defenses.\nWhile its application is\n\nnot limited to images, we illustrate this using five of the current best classifiers\n\non ImageNet dataset among which two are adversarialy retrained on purpose to\n\nbe robust against attacks.\nWith a fixed maximum perturbation of only 1.5% on\n\nany pixel, around 80% of attacks (targeted) fool the voting ensemble defense and\n\nnearly 100% when the perturbation is only 6%.\nWhile this shows how it is difficult\n\nto defend against CIA attacks, the last section of the paper gives some guidelines\n\nto limit their impact.\nSince the skyrocketing of data volumes and parallel computation capacities with GPUs during the last years, deep neural networks (DNN) have become the most effective approaches in solving many machine learning problems in several domains like computer vision, speech recognition, games playing etc.\nThey are even intended to be used in critical systems like autonomous vehicle BID17 , BID18 .\nHowever, DNN as they are currently built and trained using gradient based methods, are very vulnerable to attacks a.k.a. adversarial examples BID1 .\nThese examples aim to fool a classifier to make it predict the class of an input as another one, different from the real class, after bringing only a very limited perturbation to this input.\nThis can obviously be very dangerous when it comes to systems where human life is in stake like in self driven vehicles.\nCompanies IT networks and plants are also vulnerable if DNN based intrusion detection systems were to be deployed BID20 .Many\napproaches have been proposed to craft adversarial examples since the publication by Szegedy et al. of the first paper pointing out DNN vulnerability issue BID4 . In their\nwork, they generated adversarial examples using box-constrained L-BFGS. Later in\nBID1 , a fast gradient sign method (FGSM) that uses gradients of a loss function to determine in which direction the pixels intensity should be changed is presented. It is designed\nto be fast not optimize the loss function. Kurakin et al.\nintroduced in BID13 a straightforward simple improvement of this method where instead of taking a single step of size in the direction of the gradient-sign, multiple smaller steps are taken, and the result is clipped in the end. Papernot et al.\nintroduced in BID3 an attack, optimized under L0 distance, known as the Jacobian-based Saliency Map Attack (JSMA). Another simple\nattack known as Deepfool is provided in [34] . It is an untargeted\nattack technique optimized for the L2 distance metric. It is efficient and\nproduces closer adversarial examples than the L-BFGS approach discussed earlier. Evolutionary algorithms\nare also used by authors in BID14 to find adversarial example while maintaining the attack close to the initial data. More recently, Carlini\nand Wagner introduced in BID0 the most robust attack known to date as pointed out in BID5 . They consider different\noptimization functions and several metrics for the maximum perturbation. Their L2-attack defeated\nthe most powerful defense known as distillation BID7 . However, authors in BID6\nshowed that feature squeezing managed to destroy most of the C&W attacks. Many other defenses have\nbeen published, like adversarial training BID3 , gradient masking BID9 , defenses based on uncertainty using dropout BID10 as done with Bayesian networks, based on statistics BID11 , BID12 , or principal components BID22 , BID23 . Later, while we were carrying\nout our investigation, paper BID16 showed that not less than ten defense approaches, among which are the previously enumerated defenses, can be defeated by C&W attacks. It also pointed out that feature\nsqueezing also can be defeated but no thorough investigation actually was presented. Another possible defense but not\ninvestigated is based on JPEG encoding when dealing with images. It has never been explicitly attacked\neven after it is shown in BID13 that most attacks are countered by this defense. Also, to our knowledge, no investigation\nhas been conducted when dealing with ensemble defenses. Actually, attacks transferability between\nmodels that is well investigated and demonstrated in BID19 in the presence of an oracle (requesting defense to get labels to train a substitute model) is not guaranteed at all when it is absent. Finally, when the maximum perturbation added\nto the original data is strictly limited, clipping is needed at the end of training (adversarial crafting) even if C&W attacks are used. The quality of crafted attacks is therefore\ndegraded as the brought perturbation during the training is brutally clipped. We tackle all these points in our work while\nintroducing a new attack we call Centered Initial Attack (CIA). This approach considers the perturbation limits\nby construction and consequently no alteration is done on the CIA resulting examples.To make it clearer for the reader, an example is given below to illustrate the clipping issue. FIG0 shows a comparison between CIA and C&W L2\nattack before and after clipping on an example, a guitar targeted as a potpie with max perturbation equal to 4.0 (around 1.5%). The same number of iterations FORMULA4 is considered\nfor both methods. As can be seen on FIG0 , CIA generates the best attack\nwith 96% confidence. C&W is almost as good with a score of 95% but it is degraded\nto 88% after applying the clipping to respect the imposed max perturbation. Avoiding this degradation due to clipping is the core motivation\nof our investigation.The remaining of this paper is organized as follows. Section I presents some mathematical formulations and the principle\nof CIA strategy. Then Section II investigates the application of CIA against ensemble\ndefense, feature squeezing and JPEG encoding defenses. Then Section III provides some guidelines to limit the impact of CIA\nattacks. Finally, we give some possible future investigations in the conclusion\n.\nIn this paper we presented a new strategy called CIA for crafting adversarial examples while insuring the maximum perturbation added to the original data to be smaller than a fixed threshold.\nWe demonstrated also its robustness against some defenses, feature squeezing, ensemble defenses and even JPEG encoding.\nFor future work, it would be interesting to investigate the transferability of CIA attacks to the physical world as it is shown in BID13 that only a very limited amount of FGDM attacks, around 20%, survive this transfer.\nAnother interesting perspective is to consider partial crafting attacks while selecting regions taking into account the content of the data.\nWith regard to images for instance, this would be interesting to hide attacks with big but imperceptible perturbations.",
"role": "user"
},
{
"content": "In this paper, a new method we call Centered Initial Attack (CIA) is provided. It insures by construction the maximum perturbation to be smaller than a threshold fixed beforehand, without the clipping process.",
"role": "assistant"
}
] |
science.chia_ner
|
science.chia_ner.910
|
[
{
"content": "You will be shown inclusion or exclusion criteria for a clinical trial. Given these criteria, your task is to extract all unique entities of the following types: [\"Observation\", \"Condition\", \"Procedure\", \"Drug\", \"Temporal\", \"Multiplier\", \"Qualifier\", \"Negation\", \"ReferencePoint\", \"Scope\", \"Value\", \"Mood\", \"Device\", \"Visit\", \"Measurement\", \"Person\"].\n\nPlease return the output as a JSON object of the format: {\"Condition\" : [\"breast cancer\", ...], \"Device\" : [\"ports\", ...], \"Drug\" : [\"Atenolol\", ...], \"Measurement\" : [\"blood pressure\", ...], \"Mood\" : [\"suspected\", ...], \"Multiplier\" : [\"more than two\", ...], \"Negation\" : [\"no\", ...], \"Observation\" : [\"history\", ...], \"Person\" : [\"male\", ...], \"Procedure\" : [\"surgery\", ...], \"Qualifier\" : [\"severe\", ...], \"ReferencePoint\" : [\"enrollment\", ...], \"Scope\" : [\"infection\", ...], \"Temporal\" : [\"one day\", ...], \"Value\" : [\"50%\", ...], \"Visit\" : [\"ICU\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nNA",
"role": "user"
},
{
"content": "{\"Condition\": [], \"Device\": [], \"Drug\": [], \"Measurement\": [], \"Mood\": [], \"Multiplier\": [], \"Negation\": [], \"Observation\": [], \"Person\": [], \"Procedure\": [], \"Qualifier\": [], \"ReferencePoint\": [], \"Scope\": [], \"Temporal\": [], \"Value\": [], \"Visit\": []}",
"role": "assistant"
}
] |
science.genia_ner
|
science.genia_ner.1396
|
[
{
"content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"RNA\", \"CellLine\", \"Protein\", \"DNA\", \"CellType\"].\n\nPlease return the output as a JSON object of the format: {\"DNA\" : [\"TAL1\", ...], \"RNA\": [\"mRNAs\", ...], \"CellLine\": [\"T cells\", ...], \"CellType\": [\"monocytes\", ...], \"Protein\": [\"IL-2\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nInterferon-gamma-induced factor binding to the interleukin-4-responsive element of CD23b promoter in human tonsillar mononuclear cells: role in transient up-regulation of the interleukin-4-induced CD23b mRNA. Stimulation of human tonsillar mononuclear cells with interleukin-4 (IL-4) and interferon-gamma (IFN-gamma) rapidly induced the activation of distinct nuclear factors with different mobilities, both of which bind the IL-4 response element (IL-4RE) of CD23b promoter as examined by electrophoretic mobility shift assays (EMSA). Co-treatment of IL-4 and IFN-gamma induced, in addition to the two distinct complexes, a new complex with an intermediate mobility. The IL-4-induced complex reacted with anti-STAT (signal transducers and activators of transcription) 6, resulting in a supershift whereas the formation of the IFN-gamma-induced complex was inhibited by anti-STAT 1. The intermediate complex appeared to react with both anti-STAT 6 and anti-STAT 1. Although IFN-gamma alone did not induce CD23 mRNA transcription, Northern blot analysis revealed a transient up-regulation of the IL-4-induced CD23 mRNA by IFN-gamma within 2 h of IFN-gamma treatment in these tonsillar cells. The results suggest that the IL-4RE of the IL-4-inducible gene can accommodate both IL-4-and IFN-gamma-activated factors, such as STAT 6 and STAT 1, either in homodimeric or heterodimeric forms and the binding of these different proteins to the respective promoter may play a potential regulatory role in the IL-4-inducible gene expression.",
"role": "user"
},
{
"content": "{\"CellLine\": [], \"CellType\": [\"tonsillar cells\", \"these tonsillar cells\", \"human tonsillar mononuclear cells\"], \"DNA\": [\"CD23b promoter\", \"interleukin - 4 - responsive element\", \"IL - 4 - inducible gene\", \"IL - 4 response element\", \"IL - 4RE\"], \"Protein\": [\"nuclear factors\", \"IL - 4\", \"interferon - gamma\", \"STAT 6\", \"STAT 1\", \"anti - STAT 6\", \"IFN - gamma\", \"IL - 4 - induced complex\", \"Interferon - gamma - induced factor\", \"interleukin - 4\", \"STAT (signal transducers and activators of transcription) 6\", \"anti - STAT (signal transducers and activators of transcription) 6\", \"anti - STAT 1\", \"homodimeric or heterodimeric forms\", \"IL - 4 - and IFN - gamma - activated factors\", \"IFN - gamma - induced complex\"], \"RNA\": [\"interleukin - 4 - induced CD23b mRNA\", \"CD23b mRNA\", \"CD23 mRNA\", \"IL - 4 - induced CD23 mRNA\"]}",
"role": "assistant"
}
] |
science.data_reco_mcq_mc
|
science.data_reco_mcq_mc.1862
|
[
{
"content": "You are provided with a research question, keyphrases about the question, a description of candidate datasets and dataset options. Read the description of popular datasets provided below and select the ones that can be used to validate the following research question. Use your knowledge of machine learning datasets to make the best judgement.\nYour response should be formatted as a `json` array. For instance, for the query \"Semi supervised image classification\", a sample response might be: [\"CIFAR-10\", \"CIFAR-100\"]. Do not include any extra text in the response other than the answer array.\n\nQuery: CODA is a novel adversarial learning approach for crowd counting adaptation. We first use a multi-scale\n\nKeyphrases: crowd counting images\n\nDataset description:\nMulti Task Crowd: Multi Task Crowd is a new 100 image dataset fully annotated for crowd counting, violent behaviour detection and density level classification.\nShanghaiTech: The Shanghaitech dataset is a large-scale crowd counting dataset. It consists of 1198 annotated crowd images.\nCell: The CELL benchmark is made of fluorescence microscopy images of cell.\nMall: The Mall is a dataset for crowd counting and profiling research. Its images are collected from publicly accessible webcam.\nWinoGrande: WinoGrande is a large-scale dataset of 44k problems, inspired by the original WSC design, but adjusted to improve both the scale and the hardness of the dataset. The key steps of the dataset construction consist of (1) a carefully designed crowdsourcing procedure, followed by (2) systematic bias reduction using a novel AfLite algorithm that generalizes human-detectable word associations to machine-detectable embedding associations.\n\nOptions:- A: Multi Task Crowd B: ShanghaiTech C: Cell D: Mall E: WinoGrande",
"role": "user"
},
{
"content": "[\"ShanghaiTech\", \"Mall\"]",
"role": "assistant"
}
] |
science.covid_deepset_qa
|
science.covid_deepset_qa.549
|
[
{
"content": "You will be shown a paragraph from a biomedical research paper, followed by a question about the paper. Please extract text from the paragraph to answer the question. Do not include any text in your repsonse other than the answer.\n\nParagraph: Since the emergence of the new A(H7N9) virus, more than 400 human cases were reported to the WHO with a mortality rate of more than 35% . Most patients with A(H7N9) infections had contact with poultry or visited live animal markets. However, some sporadic cases seemed to be a result of human to human transmissions [1, 2] . In contrast to pandemic viruses which fulminantly enter the human population and cause high mortality rates, seasonal influenza viruses generally cause uncomplicated and transient infections in humans, with virus replication localized to the upper respiratory tract [3, 4] . However, in its fully developed form influenza is an acute respiratory disease resulting in hospitalizations and deaths mainly among high-risk groups. Worldwide, annual epidemics result in about three to five million cases of severe illness, and about 250,000 to 500,000 deaths [5] . For this reason WHO [6] and CDC [7] recommend antiviral treatment for any patient with suspected influenza who is at risk for influenza complications without previous laboratory confirmation. It is known that influenza virus infections are often accompanied by other viral pathogens [8] . Depending on the detection method (qRT-PCR or immunofluorescence) different ratios of co-infections have been found. Analysis by qRT-PCR revealed that 54.5-83.3% of influenza A or B positive patients were found to have at least one concomitant respiratory viral infection [9] [10] [11] [12] . The detection frequency with immunofluorescence was found to be even higher (90-100%) [13, 14] . Potential concomitant viral pathogens of influenza virus infections include human rhinovirus (hRV), respiratory syncytial virus, adenovirus, human coronavirus, human metapneumovirus and parainfluenza virus [14, 15] . As a result of the multiple infections, a specific anti-influenza mono-therapy treats the influenza virus infection only, but not the infection with the concomitant viral pathogen. Hence, the therapy often fails to sufficiently resolve symptoms. This is also reflected by the fact that neuraminidase inhibitors (NI) are highly efficacious in animal models investigating influenza mono-infections [16, 17] but show lower efficacy against influenza symptoms in clinical trials in adults with natural infections [18] . Therefore, there is a high medical need for a broadly acting antiviral therapy in combination with a specific anti-influenza therapy for treatment of patients suffering from upper respiratory tract symptoms. Ideally, the substances present in the combination complement each other by different modes of action, leading to a treatment that provides full protection against a broad range of different respiratory viruses as well as different influenza strains with a low probability to induce escape mutations. One approach for a broad antiviral therapy is the creation of a protective physical barrier in the nasal cavity using carrageenan. Carrageenan is a high molecular weight sulfated polymer derived from red seaweed (Rhodophyceae) that has been extensively used in food, cosmetic and pharmaceutical industry and is generally recognized as safe by the FDA (GRAS) (reviewed in [19] ). Three main forms of carrageenans are commercially used: kappa, iota and lambda. They differ from each other in the degree of sulfation, solubility and gelling properties [20] . The antiviral mechanism of carrageenan is based on the interference with viral attachment; as a consequence, viral entry is inhibited [21, 22] . Its antiviral activity is dependent on the type of polymer as well as the virus and the host cells [23] [24] [25] [26] [27] [28] [29] [30] [31] [32] and has been reviewed in [33] [34] [35] . We published that iota-carrageenan is a potent inhibitor of hRV [36] and influenza A [37] replication and demonstrated the antiviral efficacy of iota-carrageenan against common cold viruses by intranasal application in several randomized, double-blind, parallel group, placebo-controlled clinical trials [38] [39] [40] . The pooled analysis of two studies conducted in 153 children and 203 adults revealed that patients infected with any respiratory virus, who were intranasally treated with iota-carrageenan showed a 1.9 day faster recovery from common cold symptoms than placebo treated patients in the intention-to-treat population [41, 42] . The anti-influenza activity was shown by subgroup analysis of 49 influenza infected patients who benefited from a 3.3 days faster recovery from symptoms. The use of carrageenan nasal spray was associated with a significant reduction of the influenza viral load in nasal fluids and a significant increase in the number of virus free patients within the treatment period of 7 days. In good accordance Prieschl-Grassauer are co-founders of Marinomed Biotechnologie GmbH. Marinomed Biotechnologie GmbH had a role in study design, data collection and analysis, decision to publish, preparation of the manuscript and is financing the processing charge of the manuscript. with the literature [9] [10] [11] [12] [13] [14] we observed that the majority of influenza virus infected patients suffered from a concomitant respiratory viral infection (66%) as determined by real-time PCR. Carrageenan containing nasal sprays are already marketed for the treatment of respiratory viral infections under different brand names in 18 countries. At present the only available effective drugs for treatment and post exposure prevention of influenza are the NI (Oseltamivir and Zanamivir worldwide; Peramivir in Japan and South Korea). Since the large-scale use of M2 blockers for prophylaxis and treatment in humans [43] and farming [44] , the currently circulating influenza viruses already lack sensitivity to this drug group [45] . We have already shown an additive therapeutic effect of a combination therapy with intranasally applied iota-carrageenan and orally administered Oseltamivir in lethally H1N1 A/PR/ 8/34 infected mice and a treatment start 48 hours post infection (hpi) [37] . Due to these very promising results we further developed the concept of combining carrageenan with an NI therapy. In contrast to Oseltamivir, which needs to be activated by metabolic conversion, Zanamivir is directly applied as active drug and can also be administered intranasally [46] [47] [48] [49] [50] [51] [52] . The potential of an intranasal administration of Zanamivir was investigated by GlaxoSmithKline. In seven clinical challenge trials 66 volunteers were infected with influenza B/Yamagata/16/88 and 213 with influenza A/Texas/36/91 (H1N1). 156 of these participants got intranasally applied Zanamivir at different doses (daily dose levels from 6.4 mg to 96 mg) for prophylaxis or therapy [46, 47, 53, 54] . These challenge trials showed that treatment starting before and up to 36 hours post virus inoculation was associated with prevention of laboratory confirmed influenza and febrile illness as well as a reduction in viral titers, duration of shedding and symptoms. In total, safety data from 1092 patients after intranasal application of Zanamivir were published and no evidence for Zanamivir induced adverse events or increased frequencies of local nasal intolerance in comparison to placebo groups was found [46, 49, 52] . Taken together, the combination of a carrageenan nasal spray that provides broad antiviral activity against upper respiratory infections-including influenza-with Zanamivir, a specific anti-influenza drug, meets the existing medical need to treat multiple viral infections. In the present work we investigate the therapeutic effect of a combination of carrageenan and Zanamivir in-vitro and in an animal model. Kappa-carrageenan and iota-carrageenan were purchased from FMC Biopolymers (Philadelphia, PA). The identity, purity (>95%) of carrageenan subtypes and the molecular weight (>100,000) was confirmed by NMR analysis as described elsewhere [55] and the presence of lambda-carrageenan was below the detection limit of 3%. The dry polymer powders were dissolved in aqua bidest (Fresenius Kabi, Austria) to a final concentration of 2.4 mg/ml iota-and 0.8 mg/ml kappa-carrageenan. This 2x stock solution was sterile filtered through a 0.22 μm filter (PAA, Switzerland) and stored at room temperature until use. For further testing the stock solution was diluted to a mixture containing 1.2 mg/ml iota-carrageenan and 0.4 mg/ml kappa-carrageenan (hereinafter referred to as \"carrageenan\"). Zanamivir was purchased as powder (Haosun Pharma, China) and the identity and purity was confirmed by NMR analysis. Zanamivir was either dissolved in carrageenan or placebo solutions, followed by sterile filtration through a 0.22 μm filter (Sarstedt, Germany). For in-vivo studies all Zanamivir containing solutions were freshly prepared. Madin-Darby canine kidney (MDCK) cells were obtained from the American Type Culture Collection (ATCC, Manassas, VA) and cultivated in a 37°C incubator (Sanyo, Japan; CO 2 : 5%, relative humidity: >95%). MDCK cells were grown in Dulbecco's minimal essential (DMEM) high glucose medium (PAA, Austria) supplemented with 10% fetal bovine serum (FBS; PAA, Austria; heat inactivated). Influenza virus A/Hansa Hamburg/01/09 (H1N1(09)pdm) was kindly provided by Peter Staeheli Department of Virology, University of Freiburg, Germany and previously described in [56] ; A/Teal/Germany/Wv632/05 (H5N1) previously published in [57] (accession numbers CY061882-9) and A/Turkey/Germany/R11/01 (H7N7) (taxonomy ID 278191, accession number AEZ68716) were supplied by courtesy of Martin Beer, Institute of Diagnostic Virology, Friedrich-Loeffler-Institute, Riems, Germany; A/Aichi/2/68 (H3N2) was purchased from the ATCC. All influenza viruses were propagated in MDCK cells at 37°C and 5% CO 2 in influenza medium [Opti-Pro serum free medium (Gibco, Austria) supplemented with 4 mM L-glutamine (PAA, Austria), 1% antibiotic-antimycotic mix (PAA, Austria) and 5 μg/ml trypsin (Sigma Aldrich, Austria)]. To determine the 50% inhibitory concentration (IC 50 ) and the combination effect of carrageenan and Zanamivir, a semi-liquid plaque assay was developed. Into 96 well tissue culture plates 1.7x10 4 MDCK cells/well were seeded and infected at 90% confluence (24-28 hours later). Serial dilutions of carrageenan and Zanamivir were prepared in assay medium (influenza medium without trypsin). For infection, viruses were diluted to an MOI of 0.003 (H1N1(09)pdm and H3N2 Aichi), 0.015 (H5N1) or 0.004 (H7N7), respectively, in assay medium and incubated at room temperature (RT) for 10 min with the serial dilutions of carrageenan and/or Zanamivir, respectively. For evaluation of the combination effect of carrageenan and Zanamivir, viruses were diluted in assay medium containing constant concentrations of either carrageenan or Zanamivir. The other substance was serially diluted and used for virus incubation. Cells were infected in 6 replicates/compound dilution, respectively, and incubated at RT for 45 min before inoculum removal. Cells were further incubated with the respective concentration of the investigated substances present in the overlay [influenza medium with 2.25% Carboxymethylcellulose (CMC, Fluka, Austria)] for 30-42 hours at 37°C. Evolving plaques were evaluated after methanol/acetone cell fixation by immune staining with antibodies either directed against the influenza A nucleoprotein (AbD Serotec, Germany) (for H1N1(09)pdm, H5N1 and H7N7) or the hemagglutinin (AbD Serotec, Germany) (for H3N2). Analysis was done with a HRP labeled detection antibody (Thermo Scientific, Germany) using TMB (Biolegend, Germany) as substrate and a microplate reader at 450 nm. The reduction of detected signal represents a reduction in the number and size of plaques and indicates suppression of viral replication during infection and cultivation. After the immunostaining cells were stained with 0.005% crystal violet solution to assess the condition of the cell layer and the toxicity of the compounds. IC 50 values and standard deviations were calculated for a sigmoidal dose response model using XLfit Excel add-in version 5.3.1.3. All animal experiments were carried out according to the guidelines of the \"European Convention for the Protection of Vertebrate Animals used for Experimental and other Scientific Purposes\" and the Austrian law for animal experiments. All animal experiments were approved by the Veterinary University of Vienna institutional ethics committee and performed under the Austrian Federal Ministry of Science and Research experimental animal license numbers BMWF-68.205/0262-II/3b/2011 and BMWF-68.205/0142-II/3b2012. C57BL/6 mice were purchased from Janvier Labs, France and maintained under standard laboratory conditions in the animal facilities of the Veterinary University of Vienna. For euthanasia and anesthesia asphyxiation through CO 2 was used and all efforts were made to minimize suffering. For infection experiments, 3-5 weeks old female mice were intranasally inoculated with 50 μl influenza virus solution (25 μl/nostril) containing 2.27x10 3 or 1.65x10 3 plaque-forming unit of H1N1(09)pdm or H7N7, respectively. Subsequently, treatment started 24, 48 or 72 hpi, as indicated for the different experiments. Treatment was performed intranasally either with 50 μl therapeutic solution or placebo twice per day for 5 days.\n\nQuestion: What was the mortality rate of influenza a virus subtype h7n9 (avian or bird flu)?",
"role": "user"
},
{
"content": "more than 35%",
"role": "assistant"
}
] |
science.mslr2022_ms2_multidoc_summarization
|
science.mslr2022_ms2_multidoc_summarization.10896
|
[
{
"content": "You will be presented with a collection of medical randomized controlled trial (RCT) reports addressing a shared clinical question -- for instance, the efficacy of a medical intervention on a disease or condition. Given the titles and abstracts of these reports, your task is to generate the conclusion of the systematic review article synthesizing the key findings. Your conclusion should be a few sentences long. In your response, include only the conclusion and no other text. The RCT reports are below.\n\n\n\nTitle 1:\nExpressive writing, self-criticism, and self-reassurance.\nAbstract 1:\nOBJECTIVES Self-criticism and reassurance are important mechanisms for regulating negative emotions but relatively little attention has been paid to interventions aim ed at improving them . DESIGN This study explored the use of an expressive writing task to increase self-reassurance and reduce self-criticism using a r and omized controlled design . METHOD A total of 46 participants wrote either about life goals ( the expressive writing task , n= 23 ) or a control topic ( a review of a recent book or film , n= 23 ) for 15 min , three times within an hour . Measures of self-criticism/self-reassurance , stress , and positive affect were completed at baseline and at 2-week follow-up . The Linguistic Inquiry and Word Count ( LIWC ) was used to analyse the writing of participants in the ' life goals ' condition to identify psychological processes that might differentiate those who improved and those who did not . RESULTS While there were no significant changes in self-reported stress or positive affect , participants writing about life goals decreased in their levels of self-criticism at 2-week follow-up relative to participants writing about control topics . Text analysis showed that experimental participants using words that imply the possibility of doubt or failure , including use of the subjunctive tense ( e.g. , could , would , should ) , were least likely to decrease their self-criticism . CONCLUSION . Expressive writing shows promise as a means by which people may decrease in their self-criticism . Future research should determine whether such experimentally induced changes in self-criticism lead to the improvements in psychological health that is implied by previous cross-sectional research\n\n\nTitle 2:\nA longitudinal experimental study comparing the effectiveness of happiness-enhancing strategies in Anglo Americans and Asian Americans\nAbstract 2:\nGrowing evidence suggests that well-being interventions can be effective . However , it is unclear whether happiness-increasing practice s are equally effective for individuals from different cultural background s. To investigate this question , Anglo Americans and predominantly foreign-born Asian Americans were r and omly assigned to express optimism , convey gratitude , or list their past experiences ( control group ) . Multilevel analyses indicated that participants in the optimism and gratitude conditions reported enhanced life satisfaction relative to those in the control condition . However , Anglo Americans in the treatment conditions demonstrated larger increases in life satisfaction relative to Asian Americans , while both cultural groups in the control condition showed the least improvement . These results are consistent with the idea that the value individualist cultures place on self-improvement and personal agency bolsters the efforts of Anglo Americans to become more satisfied , whereas collectivist cultures ’ de-emphasis of self-focus and individual goals interferes with the efforts of Asian Americans to pursue enhanced well-being\n\n\nTitle 3:\nWorking memory capacity can be improved by expressive writing: a randomized experiment in a Japanese sample.\nAbstract 3:\nThis study examined the influence of expressive writing on working memory capacity ( WMC ) , a component of executive function . Japanese undergraduates ( N=104 ) were individually tested across six separate experimental sessions ( baseline , three writing , and two follow-up sessions at 1 and 5 weeks ) . Participants were r and omly assigned to write about a traumatic experience , their best possible future selves ( BPS ) , or a trivial topic for 20 minutes . WMC tests were completed at baseline and follow-ups . Results indicate that expressive writing about traumatic experience can improve WMC at 5 weeks after writing , but that writing about BPS or trivial topics has no effect on WMC\n\n\nTitle 4:\nThe Delphi list: a criteria list for quality assessment of randomized clinical trials for conducting systematic reviews developed by Delphi consensus.\nAbstract 4:\nMost systematic review s rely substantially on the assessment of the method ological quality of the individual trials . The aim of this study was to obtain consensus among experts about a set of generic core items for quality assessment of r and omized clinical trials ( RCTs ) . The invited participants were experts in the field of quality assessment of RCTs . The initial item pool contained all items from existing criteria lists . Subsequently , we reduced the number of items by using the Delphi consensus technique . Each Delphi round comprised a question naire , an analysis , and a feedback report . The feedback report included staff team decisions made on the basis of the analysis and their justification . A total of 33 international experts agreed to participate , of whom 21 completed all question naires . The initial item pool of 206 items was reduced to 9 items in three Delphi rounds . The final criteria list ( the Delphi list ) was satisfactory to all participants . It is a starting point on the way to a minimum reference st and ard for RCTs on many different research topics . This list is not intended to replace , but rather to be used alongside , existing criteria lists",
"role": "user"
},
{
"content": "These results indicate that the BPS can be considered a valuable Positive Psychology Intervention to improve clients ' wellbeing , and it seems that it might be more effective for older participants and with shorter practice s ( measured as total minutes of practice )",
"role": "assistant"
}
] |
science.data_reco_mcq_sc
|
science.data_reco_mcq_sc.10392
|
[
{
"content": "Given a short query with keyphrases about a research idea in 2018 in the domain of artificial intelligence, recommend the relevant datasets to validate the research idea from options provided. A short description of the datasets is also provided to help you.\n\nQuery: In this paper, we propose an improved FCM algorithm based on morphological reconstruction and membership filtering (FRF\n\nKeyphrases: image segmentation\n\nDataset description:\nTHEODORE: Recent work about synthetic indoor datasets from perspective views has shown significant improvements of object detection results with Convolutional Neural Networks(CNNs). In this paper, we introduce THEODORE: a novel, large-scale indoor dataset containing 100,000 high- resolution diversified fisheye images with 14 classes.\nVOT2018: VOT2018 is a dataset for visual object tracking. It consists of 60 challenging videos collected from real-life datasets.\nBSDS500: Berkeley Segmentation Data Set 500 (BSDS500) is a standard benchmark for contour detection. This dataset is designed for evaluating natural edge detection that includes not only object contours but also object interior boundaries and background boundaries.\nHyperLex: A dataset and evaluation resource that quantifies the extent of of the semantic category membership, that is, type-of relation also known as hyponymy-hypernymy or lexical entailment (LE) relation between 2,616 concept pairs.\n\nOptions:- A: THEODORE B: VOT2018 C: BSDS500 D: HyperLex",
"role": "user"
},
{
"content": "BSDS500",
"role": "assistant"
}
] |
science.bioasq_factoid_qa
|
science.bioasq_factoid_qa.1049
|
[
{
"content": "You will be shown biomedical passages and your task is to the answer the provided question. Provide a concise and accurate response and do not include any additional text in the response other than the answer.\n\nQuestion: Which algorithm has been developed for finding conserved non-coding elements (CNEs)?\n\nContext: CNEFinder: finding conserved non-coding elements in genomes.\nTowards this direction, identifying sets of CNEs in a wide range of organisms is an important first step. Currently, there are no tools published in the literature for systematically identifying CNEs in genomes.Results: We fill this gap by presenting CNEFinder; a tool for identifying CNEs between two given DNA sequences with user-defined criteria.\nResults We fill this gap by presenting CNEFinder; a tool for identifying CNEs between two given DNA sequences with user-defined criteria.",
"role": "user"
},
{
"content": "CNEFinder",
"role": "assistant"
}
] |
science.data_reco_mcq_mc
|
science.data_reco_mcq_mc.979
|
[
{
"content": "You are provided with a research question, keyphrases about the question, a description of candidate datasets and dataset options. Read the description of popular datasets provided below and select the ones that can be used to validate the following research question. Use your knowledge of machine learning datasets to make the best judgement.\nYour response should be formatted as a `json` array. For instance, for the query \"Semi supervised image classification\", a sample response might be: [\"CIFAR-10\", \"CIFAR-100\"]. Do not include any extra text in the response other than the answer array.\n\nQuery: I'm going to train a model to perform a set of 12 tasks\n\nKeyphrases: dodecadialogue text\n\nDataset description:\nMegaFace: MegaFace was a publicly available dataset which is used for evaluating the performance of face recognition algorithms with up to a million distractors (i.e., up to a million people who are not in the test set). MegaFace contains 1M images from 690K individuals with unconstrained pose, expression, lighting, and exposure.\nMSRC-12: The Microsoft Research Cambridge-12 Kinect gesture data set consists of sequences of human movements, represented as body-part locations, and the associated gesture to be recognized by the system. The data set includes 594 sequences and 719,359 frames—approximately six hours and 40 minutes—collected from 30 people performing 12 gestures.\nMECCANO: The MECCANO dataset is the first dataset of egocentric videos to study human-object interactions in industrial-like settings. The MECCANO dataset has been acquired in an industrial-like scenario in which subjects built a toy model of a motorbike.\nConvAI2: The ConvAI2 NeurIPS competition aimed at finding approaches to creating high-quality dialogue agents capable of meaningful open domain conversation. The ConvAI2 dataset for training models is based on the PERSONA-CHAT dataset.\nDailyDialog: DailyDialog is a high-quality multi-turn open-domain English dialog dataset. It contains 13,118 dialogues split into a training set with 11,118 dialogues and validation and test sets with 1000 dialogues each.\nBDD100K: Datasets drive vision progress, yet existing driving datasets are impoverished in terms of visual content and supported tasks to study multitask learning for autonomous driving. Researchers are usually constrained to study a small set of problems on one dataset, while real-world computer vision applications require performing tasks of various complexities.\n\nOptions:- A: MegaFace B: MSRC-12 C: MECCANO D: ConvAI2 E: DailyDialog F: BDD100K",
"role": "user"
},
{
"content": "[\"ConvAI2\", \"DailyDialog\"]",
"role": "assistant"
}
] |
science.medmentions_ner
|
science.medmentions_ner.1384
|
[
{
"content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"HealthCareActivity\", \"InjuryOrPoisoning\", \"BodySubstance\", \"IntellectualProduct\", \"AnatomicalStructure\", \"SpatialConcept\", \"Chemical\", \"Bacterium\", \"MedicalDevice\", \"Organization\", \"BiomedicalOccupationOrDiscipline\", \"Finding\", \"BiologicFunction\", \"Virus\", \"ResearchActivity\", \"ClinicalAttribute\", \"PopulationGroup\", \"Eukaryote\", \"BodySystem\", \"Food\", \"ProfessionalOrOccupationalGroup\"].\n\nPlease return the output as a JSON object of the format: {\"Virus\": [\"HIV\", ...], \"Bacterium\": [\"MRSA\", ...], \"AnatomicalStructure\": [\"Lung\", ...], \"BodySystem\": [\"CNS\", ...], \"BodySubstance\": [\"Serum\", ...], \"Finding\": [\"Headache\", ...], \"InjuryOrPoisoning\": [\"Fracture\", ...], \"BiologicFunction\": [\"Death\", ...], \"HealthCareActivity\": [\"Biopsy\", ...], \"ResearchActivity\": [\"Clinical trial\", ...], \"MedicalDevice\": [\"Lenses\", ...], \"SpatialConcept\": [\"Camps\", ...], \"BiomedicalOccupationOrDiscipline\": [\"Forensic medicine\", ...], \"Organization\": [\"WHO\", ...], \"ProfessionalOrOccupationalGroup\": [\"Provider\", ...], \"PopulationGroup\": [\"Swimmers\", ...], \"Chemical\": [\"Gold\", ...], \"Food\": [\"Rice\", ...], \"IntellectualProduct\": [\"RPAM\", ...], \"ClinicalAttribute\": [\"Biomarker\", ...], \"Eukaryote\": [\"Dogs\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nThe mechanism and application of the protein -stabilized gold nanocluster sensing system Protein -stabilized gold nanoclusters (protein - Au NCs) have been an attractive frontier of nanoparticle research. Due to their unique fluorescence properties, high stability, environmentally friendly synthetic routes and nontoxicity, protein - Au NCs could find applications in highly sensitive and selective detection of metal ions, inorganic ions and biomolecules in food, soil, water and biological samples. The past few years have witnessed the development of many successful strategies for the preparation of numerous protein - Au NC -based sensing systems. In this review, we focus on a number of sensing systems based on protein - Au NCs for the detection of various analytes of interest with a special emphasis on their corresponding sensing mechanisms.",
"role": "user"
},
{
"content": "{\"AnatomicalStructure\": [], \"Bacterium\": [], \"BiologicFunction\": [], \"BiomedicalOccupationOrDiscipline\": [], \"BodySubstance\": [], \"BodySystem\": [], \"Chemical\": [\"gold\", \"Au\", \"protein\", \"water\", \"metal ions\", \"inorganic ions\", \"Protein\"], \"ClinicalAttribute\": [], \"Eukaryote\": [], \"Finding\": [\"detection\", \"sensing\"], \"Food\": [\"food\"], \"HealthCareActivity\": [], \"InjuryOrPoisoning\": [], \"IntellectualProduct\": [], \"MedicalDevice\": [], \"Organization\": [], \"PopulationGroup\": [], \"ProfessionalOrOccupationalGroup\": [], \"ResearchActivity\": [\"nanoparticle research\"], \"SpatialConcept\": [], \"Virus\": []}",
"role": "assistant"
}
] |
science.bioasq_yesno_qa
|
science.bioasq_yesno_qa.415
|
[
{
"content": "You have to answer a biomedical question in binary format, i.e. only yes and no are the only acceptable answer formats. A list of paragraphs is provided as context to help you answer the question.\n\nQuestion: Is thrombophilia related to increased risk of miscarriage?\n\nContext: Thrombophilia does hardly increase the risk of IUGR/PMPC or if so, it can be prevented by LMWH\nfor illustrative purposes, a patient presenting with combined thrombophilia--both genetic and acquired--will be discussed. This patient had suffered severe gestational complications that led to devastating obstetrical outcome\nThrombophilias have been implicated in complications related to ischemic placental disease including recurrent pregnancy loss, intrauterine fetal demise, preeclampsia, fetal growth restriction, placental abruption, and preterm delivery\nFurther information about the combined risk of aPC resistance and pregnancy is needed before guidance on the management of affected women can be formulated.\nThrombotic risk during pregnancy and the puerperium is higher in asymptomatic women with than without thrombophilia\nFurther studies are required to assess the thrombotic risk in women with preeclampsia as well as early or late recurrent pregnancy loss.\nRisk of pregnancy-related venous thrombosis in carriers of severe inherited thrombophilia\nIn conclusion, homozygous carriers of factor V Leiden and, to a lesser extent, double heterozygous carriers of factor V Leiden and of the prothrombin mutation have an increased risk of venous thrombosis during pregnancy, particularly high during the postpartum period\nCareful diagnosis, observation and monitoring can add significant benefit to LMWH therapy during pregnancy\nPregnancy in healthy women is accompanied by hypercoagulable changes that may interact with thrombophilia risk factors and threaten pregnancy.\nFifty-three (13 %) women had antiphospholipid antibodies (lupus anticoagulant and/or anti-beta2-glycoprotein 1 antibodies) mainly associated with the risk of spontaneous abortion during the first trimester\nthrombophilia was found to be considerably more common in women with pregnancy-associated complications in comparison with the general population, and most frequently in conjunction with venous thromboembolism during pregnancy and the postpartum period\nWhen counseling white women with a history of preeclampsia, screening for thrombophilia can be useful for preconceptional counseling and pregnancy management.\nknowledge combined with the appropriate use of thromboprophylaxis and treatment in women who have objectively confirmed VTE continue to improve maternal and perinatal outcomes\nThe risk of having thrombophilia is doubled in men who have fathered pregnancies which ended in perinatal death as well as in the mothers of such pregnancies.\nThe prevalence of thrombophilic variants is of possible public health significance for other morbidity; but perhaps not in relation to preeclampsia\nThis study suggests that thrombophilia \"mediates\" in lowering of cardiovascular risk factors in women with a history of preeclampsia",
"role": "user"
},
{
"content": "yes",
"role": "assistant"
}
] |
science.bioasq_general_qa
|
science.bioasq_general_qa.826
|
[
{
"content": "You will be shown a question, followed by excerpts from biomedical research papers. Please answer the question based on the provided context. Do not include any text in your response other than the answer.\n\nQuestion: What is Heterochromia Iridis?\n\nContext: The purpose of this study is to report a new and promising method for changing iris color in a sectorial heterochromia iridis patient. A 22-year-old man with a complaint of innate color disparity between his eyes presented to our clinic to seek medical advice.\nTo present a case of iatrogenic Horner's syndrome seen together with the heterochromia in the post-thyroidectomy period.METHODS: A 23-year-old female patient was admitted to our clinic with complaints of low vision in the eye and difference in eye color that developed over the past two years.\nHeterochromia iridis, asymmetry of iris pigmentation, has been well described with congenital Horner syndrome.\nBACKGROUND Heterochromia iridis, asymmetry of iris pigmentation, has been well described with congenital Horner syndrome.\nBACKGROUND Heterochromia iridis, asymmetry of iris pigmentation, has been well described with congenital Horner syndrome.",
"role": "user"
},
{
"content": "Heterochromia Iridis is a condition where the affected person has differences in the color of the iris. Heterochromia iridis is a rare autosomal recessive disorder of the iris, characterized by a heterochromatic pattern of inheritance, variable expressivity, and partial or total absence of iris and/or blonde hair. Heterochromia Iridis is a rare autosomal dominant disorder of melanocyte development characterized by heterochromatosis of the coronal, sagittal, and lambdoid sutures.",
"role": "assistant"
}
] |
science.qasper_extractive_qa
|
science.qasper_extractive_qa.240.0
|
[
{
"content": "You will be shown sections from a scientific research paper, together with a question about the paper. Your task is to answer the question based on the contents of the paper. Your response should be a `json` object with two fields:\n\n\"answer\": An array of strings extracted directly from the paper which, collectively, answer the question.\n\"evidence\": An array of strings. Each should be an excerpt from the paper, in which one or more of the extracted answers can be found.\n\nFor instance, for the question \"What baselines did the authors compare against?\", a sample response might be:\n\n{\n \"answer\": [\"BERT\", \"RoBERTa\"],\n \"evidence\": [\"In our experiments, we compare the performance of our model against BERT and RoBERTa.\"]\n}\n\nDo not include any text in your response other than the json.\n\nIf the question is unanswerable given the provided excerpts, respond with the single word \"null\".\n\nPaper: Measuring Compositional Generalization: A Comprehensive Method on Realistic Data\n\nState-of-the-art machine learning methods exhibit limited compositional generalization. At the same time, there is a lack of realistic benchmarks that comprehensively measure this ability, which makes it challenging to find and evaluate improvements. We introduce a novel method to systematically construct such benchmarks by maximizing compound divergence while guaranteeing a small atom divergence between train and test sets, and we quantitatively compare this method to other approaches for creating compositional generalization benchmarks. We present a large and realistic natural language question answering dataset that is constructed according to this method, and we use it to analyze the compositional generalization ability of three machine learning architectures. We find that they fail to generalize compositionally and that there is a surprisingly strong negative correlation between compound divergence and accuracy. We also demonstrate how our method can be used to create new compositionality benchmarks on top of the existing SCAN dataset, which confirms these findings.\n\nIntroduction\nHuman intelligence exhibits systematic compositionality BIBREF0, the capacity to understand and produce a potentially infinite number of novel combinations of known components, i.e., to make “infinite use of finite means” BIBREF1. In the context of learning from a set of training examples, we can observe compositionality as compositional generalization, which we take to mean the ability to systematically generalize to composed test examples of a certain distribution after being exposed to the necessary components during training on a different distribution.\nHumans demonstrate this ability in many different domains, such as natural language understanding (NLU) and visual scene understanding. For example, we can learn the meaning of a new word and then apply it to other language contexts. As BIBREF2 put it: “Once a person learns the meaning of a new verb `dax', he or she can immediately understand the meaning of `dax twice' and `sing and dax'.” Similarly, we can learn a new object shape and then understand its compositions with previously learned colors or materials BIBREF3, BIBREF4.\nIn contrast, state-of-the-art machine learning (ML) methods often fail to capture the compositional structure that is underlying the problem domain and thus fail to generalize compositionally BIBREF2, BIBREF5, BIBREF6, BIBREF7, BIBREF3. We believe that part of the reason for this shortcoming is a lack of realistic benchmarks that comprehensively measure this aspect of learning in realistic scenarios.\nAs others have proposed, compositional generalization can be assessed using a train-test split based on observable properties of the examples that intuitively correlate with their underlying compositional structure. BIBREF8, for example, propose to test on different output patterns than are in the train set, while BIBREF2 propose, among others, to split examples by output length or to test on examples containing primitives that are rarely shown during training. In this paper, we formalize and generalize this intuition and make these contributions:\nWe introduce distribution-based compositionality assessment (DBCA), which is a novel method to quantitatively assess the adequacy of a particular dataset split for measuring compositional generalization and to construct splits that are ideally suited for this purpose (Section SECREF2).\nWe present the Compositional Freebase Questions (CFQ) , a simple yet realistic and large NLU dataset that is specifically designed to measure compositional generalization using the DBCA method, and we describe how to construct such a dataset (Section SECREF3).\nWe use the DBCA method to construct a series of experiments for measuring compositionality on CFQ and scan BIBREF2 and to quantitatively compare these experiments to other compositionality experiments (Section SECREF4).\nWe analyze the performance of three baseline ML architectures on these experiments and show that these architectures fail to generalize compositionally, and perhaps more surprisingly, that compound divergence between train and test sets is a good predictor of the test accuracy (Section SECREF5).\n\nDistribution-Based Compositionality Assessment (DBCA) ::: Principles for measuring compositionality\nWe use the term compositionality experiment to mean a particular way of splitting the data into train and test sets with the goal of measuring compositional generalization. Based on the notions of atoms and compounds described above, we say that an ideal compositionality experiment should adhere to the following two principles:\nSimilar atom distribution: All atoms present in the test set are also present in the train set, and the distribution of atoms in the train set is as similar as possible to their distribution in the test set.\nDifferent compound distribution: The distribution of compounds in the train set is as different as possible from the distribution in the test set.\nThe second principle guarantees that the experiment is compositionally challenging in the sense that it tests the learner on compounds that are as different as possible from the compounds used during training. The first principle aims to guarantee that the experiment is exclusively measuring the effect of the difference in the way atoms are composed to form compounds (rather than some related but different property such as domain adaptation on the distribution of the atoms).\nTo determine to which degree a certain experiment adheres to these principles, we use the following formalization. For a sample set $T$, we use $\\mathcal {F}_A(T)$ to denote the frequency distribution of atoms in $T$ and $\\mathcal {F}_C(T)$ for the weighted frequency distribution of compounds in $T$, which correspond to the subgraphs of the rule application DAGs. For practicality, we do not consider all subgraphs of rule application DAGs when computing the compound divergence. Instead, we first generate a large subset $\\mathbb {G}$ of subgraphs, then weight them in context of their occurrence, and keep only the ones with highest sum of weights. The purpose of the weighting is to avoid double-counting compounds that are highly correlated with some of their super-compounds. We achieve this by calculating the weight of $G \\in \\mathbb {G}$ in a sample as $w(G) = \\max _{g \\in \\text{occ}(G)} (1 - \\max _{G^{\\prime }: g \\prec g^{\\prime } \\in \\text{occ}(G^{\\prime })} P(G^{\\prime }| G))$, where $\\text{occ}(G)$ is the set of all occurrences of $G$ in the sample, $\\prec $ denotes the strict subgraph relation, and $P(G^{\\prime }| G)$ is the empirical probability of $G^{\\prime }$ occurring as a supergraph of $G$ over the full sample set. See Appendix SECREF202 for example subgraphs and more details on the weighting.\nWe measure divergence (or similarity) of the weighted distributions using the Chernoff coefficient $C_\\alpha (P \\Vert Q) = \\sum _{k} p_k^\\alpha \\, q_k^{1-\\alpha } \\in [0, 1]$ BIBREF9. For the atom divergence, we use $\\alpha =0.5$, which corresponds to the Bhattacharyya coefficient and reflects the desire of making the atom distributions in train and test as similar as possible. For the compound divergence, we use $\\alpha = 0.1$, which reflects the intuition that it is more important whether a certain compound occurs in $P$ (train) than whether the probabilities in $P$ (train) and $Q$ (test) match exactly. This allows us to formally define as follows the notions of compound divergence $\\mathcal {D}_C$ and atom divergence $\\mathcal {D}_A$ of a compositionality experiment consisting of a train set $V$ and a test set $W$:\nBased on these principles, we suggest to use as a preferred compositionality benchmark for a given dataset the accuracy obtained by a learner on splits with maximum compound divergence and low atom divergence (we use $\\mathcal {D}_A \\le 0.02$). See Section SECREF4 for details about how to construct such splits.\n\nExperimental Results and Analysis ::: Results and analysis for CFQ\nThe mean accuracies of the three architectures on CFQ are shown in Figure FIGREF28(a) and Table TABREF29. We make three main observations:\nAll models achieve an accuracy larger than 95% on a random split, and this is true even if they are trained on 10 times fewer training instances (see Appendix SECREF15 for a more detailed analysis on the performance with varying training size).\nThe mean accuracy on the MCD splits is below 20% for all architectures, which means that even a large train set (about 96k instances) with a similar distribution of atoms between train and test is not sufficient for these architectures to perform well on the test distribution.\nFor all architectures, there is a strong negative correlation between the compound divergence and the mean accuracy.\nThis suggests that the baseline models are able to capture the superficial structure of the dataset, but fail to capture the compositional structure. We find it surprising that varying the compound divergence gives direct control of the (mean) accuracy, even though the examples in train and test look similar (see Appendix SECREF91). This means that compound divergence seems to capture the core difficulty for these ML architectures to generalize compositionally.\nNote that the experiment based on output-length exhibits a worse accuracy than what we would expect based on its compositional divergence. One explanation for this is that the test distribution varies from the training distribution in other ways than compound divergence (namely in output length and a slightly higher atom divergence), which seems to make this split particularly difficult for the baseline architectures. To analyze the influence of the length ratio further, we compute the correlation between length ratios and accuracy of the baseline systems and compare it to the correlation between compound divergence and accuracy. We observe $R^2$ correlation coefficients between 0.11 and 0.22 for the input and output length ratios and between 0.81 and 0.88 for the compound divergence. This shows that despite the known phenomenon that the baseline systems struggle to generalize to longer lengths, the compound divergence seems to be a stronger explanation for the accuracy on different splits than the lengths ratios.\nError analysis. We perform an analysis of the errors for the split MCD$_{1}$ (the first MCD split that we constructed, with more details provided in Appendix SECREF13). We observe accuracies between 29% and 37% on the test set of this particular split. Qualitatively, all three systems seem to make similar errors at this point (68% of errors are on the same samples). They make more errors for longer sequences and predict about 20% too short output when they make an error. The most common category of error is the omission of a clause in the output (present in 43%-49% of the test samples), e.g.: (1) Omitted conjunctions: for the input “What spouse of a film producer executive produced and edited M0, M1, and M2?” the best system ignores “executive produced” in the output. (2) Omitted adjectives: for the input “Which female Spanish film producer was M3' s spouse?” the best system ignores the adjective “female”.\n\nData Distribution Analysis ::: Answer frequencies\nTable TABREF85 shows the most frequently occurring answers in CFQ. Not surprisingly, after the answers “Yes” and “No”, entities related in Freebase to the domain of movies have highest frequency.\n\nRule Format ::: Resolution rule format\nCFQ resolution rules transform sparql expressions and may be conditioned on knowledge. They do not affect text or logical forms.\nIn the rules index, they are described in the following format:\n$K: S_0 \\rightarrow S_1~...~S_n$\nwhere $K$ represents a comma-separated list of knowledge preconditions, $S_0$ is a variable-based expression and $S_1~...~S_n$ are either raw sparql strings or else expressions described in terms of the same variables used in $S_0$ and $K$.\nThese rules are interpreted as stating that if there exists a variable replacement $r()$ replacing $v_1,...,v_m$ with some logical forms, strings, or expressions $l_1,...,l_m$ respectively, such that $r(K) \\subseteq KB^{CFQ}$, then we can apply the resolution rule by rewriting $r(S_0)$ to the sequence of terms $r(S_1)~...~r(S_n)$.\n\nExample of a rule application DAG ::: Entity placeholders\nAs described in Section SECREF16, during generation we initially generate a $\\langle \\text{question, logical form, \\textsc {sparql}{} query} \\rangle $ triple containing entity placeholders, and then replace those placeholders with specific entities as a post-processing step. Conceptually, one could construct a rule application DAG describing either the process by which the original $\\langle \\text{question, logical form, \\textsc {sparql}{} query} \\rangle $ triple with entity placeholders was generated, or alternatively the rules that would need to be applied if constructing the $\\langle \\text{question, logical form, \\textsc {sparql}{} query} \\rangle $ triple containing the final entity MIDs directly. Structurally, these two DAGs are identical, differing only in the definition of two entity-related rules described below. The rule application DAG shown in the accompanying figures is the version using entity placeholders.\nVersions of entity rules applicable when using entity placeholders:\nENTITY=[ENTITY]_HSz7QrdGdsX:\nENTITY(number:singular)/Entity(new_var(V1))\n$\\rightarrow $ '[entity]'\nENTITY_MID:\nent2sparql(Entity($X)) $\\rightarrow $ $X\nVersions of entity rules applicable when using actual entity MIDs:\nENTITY=[ENTITY]_HSz7QrdGdsX:\nENTITY(number:singular)/'m.'$X\n$\\rightarrow $ 'm.'$X\nENTITY_MID:\nent2sparql('m.'$X) $\\rightarrow $ 'ns:m.'$X\n\nTable 3: Comparison of relevant measurements for different split methods on CFQ / SCAN.\n\nFigure 11: Accuracies of the three baseline systems on SCAN at different divergence levels as a function of training size.\n\nFigure 13: The normalized rule application DAG that was produced for “Who directed [entity]?” (resolution rules portion, continued from Figure 12).\n\nQuestion: How strong is negative correlation between compound divergence and accuracy in performed experiment?",
"role": "user"
},
{
"content": "{\"answer\": [\" between 0.81 and 0.88\"], \"evidence\": [\"Note that the experiment based on output-length exhibits a worse accuracy than what we would expect based on its compositional divergence. One explanation for this is that the test distribution varies from the training distribution in other ways than compound divergence (namely in output length and a slightly higher atom divergence), which seems to make this split particularly difficult for the baseline architectures. To analyze the influence of the length ratio further, we compute the correlation between length ratios and accuracy of the baseline systems and compare it to the correlation between compound divergence and accuracy. We observe $R^2$ correlation coefficients between 0.11 and 0.22 for the input and output length ratios and between 0.81 and 0.88 for the compound divergence. This shows that despite the known phenomenon that the baseline systems struggle to generalize to longer lengths, the compound divergence seems to be a stronger explanation for the accuracy on different splits than the lengths ratios.\"]}",
"role": "assistant"
}
] |
science.medmentions_ner
|
science.medmentions_ner.950
|
[
{
"content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"HealthCareActivity\", \"InjuryOrPoisoning\", \"BodySubstance\", \"IntellectualProduct\", \"AnatomicalStructure\", \"SpatialConcept\", \"Chemical\", \"Bacterium\", \"MedicalDevice\", \"Organization\", \"BiomedicalOccupationOrDiscipline\", \"Finding\", \"BiologicFunction\", \"Virus\", \"ResearchActivity\", \"ClinicalAttribute\", \"PopulationGroup\", \"Eukaryote\", \"BodySystem\", \"Food\", \"ProfessionalOrOccupationalGroup\"].\n\nPlease return the output as a JSON object of the format: {\"Virus\": [\"HIV\", ...], \"Bacterium\": [\"MRSA\", ...], \"AnatomicalStructure\": [\"Lung\", ...], \"BodySystem\": [\"CNS\", ...], \"BodySubstance\": [\"Serum\", ...], \"Finding\": [\"Headache\", ...], \"InjuryOrPoisoning\": [\"Fracture\", ...], \"BiologicFunction\": [\"Death\", ...], \"HealthCareActivity\": [\"Biopsy\", ...], \"ResearchActivity\": [\"Clinical trial\", ...], \"MedicalDevice\": [\"Lenses\", ...], \"SpatialConcept\": [\"Camps\", ...], \"BiomedicalOccupationOrDiscipline\": [\"Forensic medicine\", ...], \"Organization\": [\"WHO\", ...], \"ProfessionalOrOccupationalGroup\": [\"Provider\", ...], \"PopulationGroup\": [\"Swimmers\", ...], \"Chemical\": [\"Gold\", ...], \"Food\": [\"Rice\", ...], \"IntellectualProduct\": [\"RPAM\", ...], \"ClinicalAttribute\": [\"Biomarker\", ...], \"Eukaryote\": [\"Dogs\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nMultifaceted Functions of NOD-Like Receptor Proteins in Myeloid Cells at the Intersection of Innate and Adaptive Immunity NOD-like receptor (NLR) proteins, as much as Toll-like receptor proteins, play a major role in modulating myeloid cells in their immune functions. There is still, however, limited knowledge on the expression and function of several of the mammalian NLR proteins in myeloid lineages. Still, the function of pyrin domain -containing NLR proteins and NLRC4 / NAIP as inflammasome components that drive interleukin-1β (IL-1β) and IL-18 maturation and secretion upon pathogen stimulation is well established. NOD1, NOD2, NLRP3, and NLRC4 / NAIP act as bona fide pattern recognition receptors (PRRs) that sense microbe-associated molecular patterns (MAMPs) but also react to endogenous danger-associated molecular patterns (DAMPs). Ultimately, activation of these receptors achieves macrophage activation and maturation of dendritic cells to drive antigen-specific adaptive immune responses. Upon infection, sensing of invading pathogens and likely of DAMPs that are released in response to tissue injury is a process that involves multiple PRRs in both myeloid and epithelial cells, and these act in concert to design tailored, pathogen - adapted immune responses by induction of different cytokine profiles, giving rise to appropriate lymphocyte polarization.",
"role": "user"
},
{
"content": "{\"AnatomicalStructure\": [\"myeloid lineages\", \"myeloid\", \"myeloid cells\", \"dendritic cells\", \"Myeloid Cells\", \"epithelial cells\"], \"Bacterium\": [], \"BiologicFunction\": [\"Functions\", \"adaptive immune responses\", \"infection\", \"Adaptive Immunity\", \"immune functions\", \"lymphocyte polarization\", \"secretion\", \"expression\", \"adapted immune responses\", \"maturation\", \"macrophage activation\", \"activation of these receptors\"], \"BiomedicalOccupationOrDiscipline\": [], \"BodySubstance\": [], \"BodySystem\": [], \"Chemical\": [\"NLRC4\", \"IL-18\", \"DAMPs\", \"NLR proteins\", \"pattern recognition receptors\", \"NOD1\", \"Toll-like receptor proteins\", \"IL-1\\u03b2\", \"antigen-specific\", \"inflammasome\", \"NOD2\", \"PRRs\", \"danger-associated molecular patterns\", \"NOD-like receptor (NLR) proteins\", \"NAIP\", \"interleukin-1\\u03b2\", \"NOD-Like Receptor Proteins\", \"NLRP3\"], \"ClinicalAttribute\": [], \"Eukaryote\": [\"mammalian\"], \"Finding\": [], \"Food\": [], \"HealthCareActivity\": [], \"InjuryOrPoisoning\": [\"tissue injury\"], \"IntellectualProduct\": [], \"MedicalDevice\": [], \"Organization\": [], \"PopulationGroup\": [], \"ProfessionalOrOccupationalGroup\": [], \"ResearchActivity\": [], \"SpatialConcept\": [\"MAMPs\", \"pyrin domain\", \"microbe-associated molecular patterns\"], \"Virus\": []}",
"role": "assistant"
}
] |
science.data_reco_mcq_mc
|
science.data_reco_mcq_mc.1760
|
[
{
"content": "You are provided with a research question, keyphrases about the question, a description of candidate datasets and dataset options. Read the description of popular datasets provided below and select the ones that can be used to validate the following research question. Use your knowledge of machine learning datasets to make the best judgement.\nYour response should be formatted as a `json` array. For instance, for the query \"Semi supervised image classification\", a sample response might be: [\"CIFAR-10\", \"CIFAR-100\"]. Do not include any extra text in the response other than the answer array.\n\nQuery: I'm interested in training a model that predicts the pose of a face in an image.\n\nKeyphrases: facial image analysis images\n\nDataset description:\nLFPW: The Labeled Face Parts in-the-Wild (LFPW) consists of 1,432 faces from images downloaded from the web using simple text queries on sites such as google.com, flickr.com, and yahoo.com. Each image was labeled by three MTurk workers, and 29 fiducial points, shown below, are included in dataset.\nMegaFace: MegaFace was a publicly available dataset which is used for evaluating the performance of face recognition algorithms with up to a million distractors (i.e., up to a million people who are not in the test set). MegaFace contains 1M images from 690K individuals with unconstrained pose, expression, lighting, and exposure.\nWFLW: The Wider Facial Landmarks in the Wild or WFLW database contains 10000 faces (7500 for training and 2500 for testing) with 98 annotated landmarks. This database also features rich attribute annotations in terms of occlusion, head pose, make-up, illumination, blur and expressions.\nMulti-PIE: The Multi-PIE (Multi Pose, Illumination, Expressions) dataset consists of face images of 337 subjects taken under different pose, illumination and expressions. The pose range contains 15 discrete views, capturing a face profile-to-profile.\nUMDFaces: UMDFaces is a face dataset divided into two parts: Still Images - 367,888 face annotations for 8,277 subjects. Video Frames - Over 3.7 million annotated video frames from over 22,000 videos of 3100 subjects.\nAFLW: The Annotated Facial Landmarks in the Wild (AFLW) is a large-scale collection of annotated face images gathered from Flickr, exhibiting a large variety in appearance (e.g., pose, expression, ethnicity, age, gender) as well as general imaging and environmental conditions. In total about 25K faces are annotated with up to 21 landmarks per image.\n\nOptions:- A: LFPW B: MegaFace C: WFLW D: Multi-PIE E: UMDFaces F: AFLW",
"role": "user"
},
{
"content": "[\"LFPW\", \"AFLW\"]",
"role": "assistant"
}
] |
science.qasa_abstractive_qa
|
science.qasa_abstractive_qa.1213
|
[
{
"content": "You will be shown an excerpt from a computer science scientific research paper, followed by a question about the paper. Please answer the question. Do not include any text in your response other than the answer.\n\nContext: We test the performance of GraphSAGE on three benchmark tasks: (i) classifying academic papers into different subjects using the Web of Sciencecitation dataset, (ii) classifying Reddit posts as belonging to different communities, and (iii) classifying protein functions across various biological protein-protein interaction (PPI) graphs. Sections 4.1 and 4.2 summarize the datasets, and the supplementary material contains additional information.In all these experiments, we perform predictions on nodes that are not seen during training, and, in the case of the PPI dataset, we test on entirely unseen graphs.\n\nQuestion: Is 'perform entirely unseen graphs in PPI' means test the proposed model on new PPI graphs?",
"role": "user"
},
{
"content": "Yes it is. The reason is that authors test on new graphs that is not shown in training.",
"role": "assistant"
}
] |
science.pubmedqa_qa
|
science.pubmedqa_qa.142
|
[
{
"content": "You will be shown an abstract from a research paper, and a corresponding research question. Determine the answer to the research question based on the context provided. Your answer should be one of: \"YES\", \"NO\", or \"MAYBE\".\n\nContext:\n(OBJECTIVES) To compare the results between a sliding compression hip screw and an intramedullary nail in the treatment of pertrochanteric fractures.\n(DESIGN) Prospective computer-generated randomization of 206 patients into two study groups: those treated by sliding compression hip screw (Group 1; n = 106) and those treated by intramedullary nailing (Group 2; n = 100).\n(SETTING) University Level I trauma center.\n(PATIENTS) All patients over the age of fifty-five years presenting with fractures of the trochanteric region caused by a low-energy injury, classified as AO/OTA Type 31-A1 and A2.\n(INTERVENTION) Treatment with a sliding compression hip screw (Dynamic Hip Screw; Synthes-Stratec, Oberdorf, Switzerland) or an intramedullary nail (Proximal Femoral Nail; Synthes-Stratec, Oberdorf, Switzerland).\n(MAIN OUTCOME MEASUREMENTS) Intraoperative: operative and fluoroscopy times, the difficulty of the operation, intraoperative complications, and blood loss. Radiologic: fracture healing and failure of fixation. Clinical: pain, social functioning score, and mobility score.\n(RESULTS) The minimum follow-up was one year. We did not find any statistically significant difference, intraoperatively, radiologically, or clinically, between the two groups of patients.\n\nQuestion:\nPertrochanteric fractures: is there an advantage to an intramedullary nail?\n\nAnswer:",
"role": "user"
},
{
"content": "NO",
"role": "assistant"
}
] |
science.scitldr_aic
|
science.scitldr_aic.1574
|
[
{
"content": "Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.\n\nPaper text:\n\nEmotion is playing a great role in our daily lives.\nThe necessity and importance of an automatic Emotion recognition system is getting increased.\nTraditional approaches of emotion recognition are based on facial images, measurements of heart rates, blood pressure, temperatures, tones of voice/speech, etc.\nHowever, these features can potentially be changed to fake features.\nSo to detect hidden and real features that is not controlled by the person are data measured from brain signals.\nThere are various ways of measuring brain waves: EEG, MEG, FMRI, etc.\nOn the bases of cost effectiveness and performance trade-offs, EEG is chosen for emotion recognition in this work.\nThe main aim of this study is to detect emotion based on EEG signal analysis recorded from brain in response to visual stimuli.\nThe approaches used were the selected visual stimuli were presented to 11 healthy target subjects and EEG signal were recorded in controlled situation to minimize artefacts (muscle or/and eye movements). \nThe signals were filtered and type of frequency band was computed and detected.\nThe proposed method predicts an emotion type (positive/negative) in response to the presented stimuli.\nFinally, the performance of the proposed approach was tested.\nThe average accuracy of machine learning algorithms (i.e. J48, Bayes Net, Adaboost and Random Forest) are 78.86, 74.76, 77.82 and 82.46 respectively. \nIn this study, we also applied EEG applications in the context of neuro-marketing.\nThe results empirically demonstrated detection of the favourite colour preference of customers in response to the logo colour of an organization or Service.",
"role": "user"
},
{
"content": "This paper presents EEG based emotion detection of a person towards an image stimuli and its applicability on neuromarketing.",
"role": "assistant"
}
] |
science.anat_em_ner
|
science.anat_em_ner.469
|
[
{
"content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"AnatomicalEntity\"].\n\nPlease return the output as a JSON object of the format: {\"AnatomicalEntity\": [\"tumor cells\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nSystematic proteomic analysis of human hepotacellular carcinoma cells reveals molecular pathways and networks involved in metastasis. \nSystematic proteomic studying of the mechanism of hepatocellular carcinoma (HCC) metastasis remains challenging. We performed comparative proteomic and pathway analysis of four human metastatic HCC cell lines to identify metastasis-associated proteins. These HCC cell lines had a similar genetic background but with an increasing potential of metastasis. Using a combination of two dimensional electrophoresis (2-DE) and MALDI-TOF mass spectrometry, a total of 125 proteins and their post-translational modification forms or isoforms were found to be differentially expressed in the cell lines. Among them, 29 were gradually up-regulated whereas 17 were down-regulated with increasing metastatic potential. Instead of a traditional single-gene readout, global bioinformatics analysis was carried out, which revealed that the glycolysis pathway was the most significantly enriched pathway. The heat shock proteins (HSPs) centered and NF-kappaB centered networks were also enriched in the result, which may imply the key function of inflaming on metastasis. Meanwhile, knockdown of HDGF, an up-regulated protein and a target of NF-kappaB, induced cell apoptosis in the metastatic HCC cells. This work provides a demonstration that a combination of bioinformatics and comparative proteomics can help in finding out potential biomarkers associated with HCC metastasis on the level of pathways.",
"role": "user"
},
{
"content": "{\"AnatomicalEntity\": [\"HCC cell lines\", \"metastatic HCC cells\", \"cell\", \"hepatocellular carcinoma\", \"cell lines\", \"HCC\", \"metastatic HCC cell lines\", \"hepotacellular carcinoma cells\"]}",
"role": "assistant"
}
] |
science.qasper_extractive_qa
|
science.qasper_extractive_qa.296.1
|
[
{
"content": "You will be shown sections from a scientific research paper, together with a question about the paper. Your task is to answer the question based on the contents of the paper. Your response should be a `json` object with two fields:\n\n\"answer\": An array of strings extracted directly from the paper which, collectively, answer the question.\n\"evidence\": An array of strings. Each should be an excerpt from the paper, in which one or more of the extracted answers can be found.\n\nFor instance, for the question \"What baselines did the authors compare against?\", a sample response might be:\n\n{\n \"answer\": [\"BERT\", \"RoBERTa\"],\n \"evidence\": [\"In our experiments, we compare the performance of our model against BERT and RoBERTa.\"]\n}\n\nDo not include any text in your response other than the json.\n\nIf the question is unanswerable given the provided excerpts, respond with the single word \"null\".\n\nPaper: Doc2Vec on the PubMed corpus: study of a new approach to generate related articles\n\nPubMed is the biggest and most used bibliographic database worldwide, hosting more than 26M biomedical publications. One of its useful features is the \"similar articles\" section, allowing the end-user to find scientific articles linked to the consulted document in term of context. The aim of this study is to analyze whether it is possible to replace the statistic model PubMed Related Articles (pmra) with a document embedding method. Doc2Vec algorithm was used to train models allowing to vectorize documents. Six of its parameters were optimised by following a grid-search strategy to train more than 1,900 models. Parameters combination leading to the best accuracy was used to train models on abstracts from the PubMed database. Four evaluations tasks were defined to determine what does or does not influence the proximity between documents for both Doc2Vec and pmra. The two different Doc2Vec architectures have different abilities to link documents about a common context. The terminological indexing, words and stems contents of linked documents are highly similar between pmra and Doc2Vec PV-DBOW architecture. These algorithms are also more likely to bring closer documents having a similar size. In contrary, the manual evaluation shows much better results for the pmra algorithm. While the pmra algorithm links documents by explicitly using terminological indexing in its formula, Doc2Vec does not need a prior indexing. It can infer relations between documents sharing a similar indexing, without any knowledge about them, particularly regarding the PV-DBOW architecture. In contrary, the human evaluation, without any clear agreement between evaluators, implies future studies to better understand this difference between PV-DBOW and pmra algorithm.\n\nBackground ::: PubMed\nPubMed is the largest database of bio-medical articles worldwide with more than 29,000,000 freely available abstracts. Each article is identified by an unique PubMed IDentifier (PMID) and is indexed with the Medical Subject Headings (MeSH) terminology. In order to facilitate the Information Retrieval (IR) process for the end-user, PubMed launched in 2007 a service of related articles search, available both through its Graphical User Interface (GUI) and its Application Programming Interface (API). Regarding the GUI, while the user is reading a publication, a panel presents title of articles that may be linked to the current reading. For the API, the user must query eLink with a given PMID BIBREF0. The output will be a list of others PMIDs, each associated with the similarity score computed by the pmra (pubmed related article) model BIBREF1.\n\nBackground ::: The pmra model\nTo do so, each document is tokenized into many topics $S_{i}$. Then, the probability $P(C|D)$ that the user will find relevant the document C when reading the document D will be calculated. For this purpose, the authors brought the concept of eliteness. Briefly, a topic $S_{i}$ is presented as elite topic for a given document if a word $W_{i}$ representing $S_{i}$ is used with a high frequency in this document. This work allows to bring closer documents sharing a maximum of elite topics. In the article presenting the pmra model, authors claim that “the deployed algorithm in PubMed also takes advantage of MeSH terms, which we do not discuss here”. We can thus assume that a similar score is computed thanks to the associated MeSH terms with both documents D and C. Such an indexing is highly time-consuming and has to be manually performed.\n\nBackground ::: Documents embedding\nNowadays, embedding models allow to represent a text into a vector of fixed dimensions. The primary purpose of this mathematical representation of documents was to be able to use texts as input of deep neural networks. However, these models have been used by the IR community as well: once all fitted in the same multidimensional space, the cosine distance between two documents vectors can estimate the proximity between these two texts. In 2013, Mikolov et al. released a word embedding method called Word2Vec (W2V) BIBREF2. Briefly, this algorithm uses unsupervised learning to train a model which embeds a word as a vector while preserving its semantic meaning. Following this work, Mikolov and Le released in 2014 a method to vectorize complete texts BIBREF3. This algorithm, called Doc2Vec (D2V), is highly similar to W2V and comes with two architectures. The Distributed Memory Model of Paragraph Vectors (PV-DM) first trains a W2V model. This word embedding will be common for all texts from a given corpus C on which it was trained. Then, each document $D_{x}$ from C will be assigned to a randomly initialised vector of fixed length, which will be concatenated with vectors of words composing $D_{x}$ during the training time (words and documents vectors are sharing the same number of dimensions). This concatenation will be used by a final classifier to predict the next token of a randomly selected window of words. The accuracy of this task can be calculated and used to compute a loss function, used to back-propagate errors to the model, which leads to a modification of the document’s representation. The Distributed Bag of Words version of Paragraph Vector (PV-DBOW) is highly similar to the PV-DM, the main difference being the goal of the final classifier. Instead of concatenating vector from the document with word vectors, the goal here is to output words from this window just by using the mathematical representation of the document.\n\nMethods ::: Material\nDuring this study, the optimisation of the model’s parameters and one of the evaluation tasks require associated MeSH terms with the abstracts from PubMed. Briefly, the MeSH is a medical terminology, used to index documents on PubMed to perform keywords-based queries. The MEDOC program was used to create a MySQL database filled with 26,345,267 articles from the PubMed bulk downloads on October 2018, 5th BIBREF13. Then, 16,048,372 articles having both an abstract and at least one associated MeSH term were selected for this study. For each, the PMID, title, abstract and MeSH terms were extracted. The titles and abstracts were lowered, tokenized and concatenated to compose the PubMed documents corpus.\n\nMethods ::: Optimisation\nAmong all available parameters to tune the D2V algorithm released by Gensim, six of them were selected for optimisation BIBREF14. The window_size parameter affects the size of the sliding window used to parse texts. The alpha parameter represents the learning rate of the network. The sample setting allows the model to reduce the importance given to high-frequency words. The dm parameter defines the training used architecture (PV-DM or PV-DBOW). The hs option defines whether hierarchical softmax or negative sampling is used during the training. Finally, the vector_size parameter affects the number of dimensions composing the resulting vector.\nA list of possible values was defined for each of these six parameters. The full amount of possible combinations of these parameters were sent to slave nodes on a cluster, each node training a D2V model with a unique combination of parameters on 85% of 100,000 documents randomly selected from the corpus. Every article from the remaining 15% were then sent to each trained model and queried for the top-ten closest articles. For each model, a final accuracy score represented by the average of common MeSH terms percentage between each document $D_{i}$ from the 15,000 extracted texts and their returning top-ten closest documents was calculated. The combination of parameters with the highest score was kept for both PV-DBOW and PV-DM.\n\nMethods ::: Training\nThe final models were trained on a server powered by four XEON E7 (144 threads) and 1To of RAM. Among the total corpus (16,048,372 documents), 1% (160,482) was extracted as a test set (named TeS) and was discarded from the training. The final models were trained on 15,887,890 documents representing the training set called TrS.\n\nMethods ::: Evaluation\nThe goal here being to assess if D2V could effectively replace the related-document function on PubMed, five different document similarity evaluations were designed as seen on figure FIGREF9. These tasks were designed to cover every similarities, from the most general (the context) to the character-level similarity.\nIndeed, a reliable algorithm to find related documents should be able to bring closer texts sharing either a similar context, some important ideas (stems of words), an amount of non-stemmed vocabulary (e.g. verbs tenses are taken in account) and should not be based on raw character-similarity (two documents sharing the same proportion of letter “A” or having a similar length should not be brought together if they do not exhibit upper levels similarity).\n\nMethods ::: Evaluation ::: String length\nTo assess whether a similar length could lead to convergence of two documents, the size of the query document $D_{x}$ has been compared with the top-close document $C_{x}$ for 10,000 document randomly selected from the TeS after some pre-processing steps (stopwords and spaces were removed from both documents).\n\nMethods ::: Evaluation ::: Words co-occurrences\nA matrix of words co-occurrence was constructed on the total corpus from PubMed. Briefly, each document was lowered and tokenized. A matrix was filled with the number of times that two words co-occur in a single document. Then, for 5,000 documents $D_{x}$ from the TeS, all models were queried for the top-close document $C_{x}$. All possible combinations between all words $WD_{x} \\in D_{x}$ and all words $WC_{x} \\in C_{x}$ (excluding stopwords) were extracted, 500 couples were randomly selected and the number of times each of them was co-occurring was extracted from the matrix. The average value of this list was calculated, reflecting the proximity between D and C regarding their words content. This score was also calculated between each $D_{x}$ and the top-close document $C_{x}$ returned by the pmra algorithm.\n\nMethods ::: Evaluation ::: MeSH similarity\nIt is possible to compare the ability of both pmra and D2V to bring closer articles which were indexed with common labels. To do so, 5,000 documents $D_{x}$ randomly selected from the TeS were sent to both pmra and D2V architectures, and the top-five closer articles $C_{x}$ were extracted. The following rules were then applied to each MeSH found associated with $D_{x}$ for each document $C_{x_i}$ : add 1 to the score if this MeSH term is found in both $D_{x}$ and $C_{x_i}$, add 3 if this MeSH is defined as major topic and add 1 for each qualifier in common between $D_{x}$ and Cxi regarding this particular MeSH term. Then, the mean of these five scores was calculated for both pmra and D2V.\n\nMethods ::: Evaluation ::: Manual evaluation\nAmong all documents contained in the TeS, 10 articles $D_{x}$ have been randomly selected. All of them were sent to the pmra and to the most accurate of the two D2V architectures, regarding the automatic evaluations explained above. Each model was then queried for the ten closest articles for each $D_{x_i} \\in D_{x}$ and the relevance between $D_{x_i}$ and every of the top-ten documents was blindly assessed by a three-modality scale used in other standard Information Retrieval test sets: bad (0), partial (1) or full relevance (2) BIBREF15. In addition, evaluators have been asked to rank publications according their relevant proximity with the query, the first being the closest from their perspective. Two medical doctors and two medical data librarians took part in this evaluation.\n\nResults ::: Evaluation ::: String length\nBy looking at the length difference in term of characters between documents brought closer by D2V, a difference is visible between the two architectures (Figure FIGREF19C). In fact, while a very low correlation is visible under the PV-DM architecture (coefficient $-2.6e10^{-5}$) and under the pmra model ($-5.4e10^{-5}$), a stronger negative one is observed between the cosine distance computed by the PV-DBOW for two documents and their difference in terms of length (coefficient $-1.1e10^{-4}$). This correlation suggests that two documents having a similar size are more likely to be closer in the vectorial space created by the PV-DBOW (cosine distance closer to 1).\n\nResults ::: Evaluation ::: MeSH similarity\nBy studying the common MeSH labels between two close documents, it is possible to assess whether the context influence or not this proximity. By looking at the figure FIGREF23A, we can see that PV-DBOW and pmra are very close in term of MeSH score, indicating that they bring closer documents sharing a similar number of common MeSH labels in average. The pmra model seems to be more likely to output documents sharing a higher MeSH score (the distribution tail going further 4 with a mean equal to 1.58, standard deviation: 1.06), while the PV-DM brings closer documents that are less likely to share an important number of MeSH terms, with a majority of score between 0 and 1 (mean equal to 1.16, standard deviation: 0.73). The figure FIGREF23B shows the correlation between the MeSH score for documents returned by the pmra and those returned by both PV-DM and PV-DBOW models. The PV-DBOW algorithm looks way closer to the pmra in terms of common MeSH labels between two close documents with a slope of 1.0064. The PV-DM model is much less correlated, with a slope of 0.1633, indicating less MeSH in common for close articles.\n\nFigure 1. Ranking of the five designed documents similarity evaluation tasks.\n\nFigure 2. Analysis of stems, words and length differences between texts broughts closer by D2V and pmra. Correlation plot between the stems co-occurrence score (A), words co-occurrence score (B), length difference (C) and scores returned by two D2V architectures (PV-DBOW, blue and PV-DM, orange) or the pmra model (green, normalized values). Outliers with z-score ¿ 3 were discarded from the plot.\n\nFigure 3. Study of both pmra and D2V models regarding their ability to bring closer documents sharing many MeSH labels. A (upper panel): frequency of the different MeSH scores for the pmra, PV-DM and PV-DBOW models. PV-DBOW and pmra are centred on the same value and have a similar distribution, indicating a common ability to link documents regarding their topic. However, the PV-DM algorithm looks less efficient. B (lower panel): correlation between MeSH scores calculated from the pmra and those from D2V. The slopes of the trend lines support the precedent result with a slope close to 1 for PV-DBOW while the PV-DM only reach 0.1, indicating a weaker correlation. Outliers with z-score ¿ 3 were discarded from the plot.\n\nFigure 4. Global agreement between four evaluators rating the accuracy of the D2V and pmra models. Colour scale indicates the strength of the agreement between two annotators. It ranges from 0.49 between the two medical doctors SJD and JPL to 0.61 between the two medical data librarian EL and GK.\n\nQuestion: What Doc2Vec architectures other than PV-DBOW have been tried?",
"role": "user"
},
{
"content": "{\"answer\": [\"PV-DM\"], \"evidence\": [\"Among all available parameters to tune the D2V algorithm released by Gensim, six of them were selected for optimisation BIBREF14. The window_size parameter affects the size of the sliding window used to parse texts. The alpha parameter represents the learning rate of the network. The sample setting allows the model to reduce the importance given to high-frequency words. The dm parameter defines the training used architecture (PV-DM or PV-DBOW). The hs option defines whether hierarchical softmax or negative sampling is used during the training. Finally, the vector_size parameter affects the number of dimensions composing the resulting vector.\"]}",
"role": "assistant"
}
] |
science.bioasq_factoid_qa
|
science.bioasq_factoid_qa.227
|
[
{
"content": "You will be shown biomedical passages and your task is to the answer the provided question. Provide a concise and accurate response and do not include any additional text in the response other than the answer.\n\nQuestion: What does polyadenylate-binding protein 4 (PABP4) bind to?\n\nContext: RNA binding proteome\nIn vertebrates, poly(A) binding protein (PABP) is known to exist in five different isoforms.\nCytoplasmic poly(A)-binding proteins (PABPs) regulate mRNA stability and translation.\npoly(A) binding proteins 1 and 4 (PABPC1 and PABPC4)\nCytoplasmic poly(A) binding protein 4 (PABPC4) is an RNA-processing protein that plays an important role in the regulation of gene expression.\nCytoplasmic poly(A) binding protein 4\npoly(A)-binding protein (PABP)\nPoly(A) binding protein (PABP) binds mRNA poly(A) tails\nUsing mass spectrometry sequencing we identified poly(A) binding proteins-1 and -4 (PABP1 and PABP4)\nThe poly(A)-binding protein (PABP) is an important translation initiation factor that binds to the polyadenylated 3' end of mRNA.\nA family of cytoplasmic poly(A)-binding proteins (PABPs) bind the poly(A) tail and can regulate mRNA translation and stability.\nHuman PABP binds AU-rich RNA via RNA-binding domains 3 and 4.",
"role": "user"
},
{
"content": "PABP4 binds mRNA poly(A) tails.",
"role": "assistant"
}
] |
science.bioasq_yesno_qa
|
science.bioasq_yesno_qa.1111
|
[
{
"content": "You have to answer a biomedical question in binary format, i.e. only yes and no are the only acceptable answer formats. A list of paragraphs is provided as context to help you answer the question.\n\nQuestion: Is acupotomy used to treat muscle stiffness?\n\nContext: All the included studies reviewed musculoskeletal disorders and reported a significantly higher total effective and cure rates in the acupotomy group for frozen shoulder, cervical spondylosis, third lumbar vertebrae transverse process syndrome, trigger finger, knee osteoarthritis, and lumbar spinal stenosis, compared to the other active control groups\nAcupotomy showed promising results for some musculoskeletal disorders; however, additional high-quality evidence is required to make clinical recommendations regarding this procedure.\nAcupotomy has been widely used to treat nerve entrapment syndrome\nTo evaluate the clinical efficacy and safety of acupotomy in treatment of knee osteoarthritis\nEffect and safety of acupotomy in treatment of knee osteoarthritis\nAcupotomy Therapy for Knee Osteoarthritis Pain: Systematic Review and Meta-Analysis.\nWe included only randomized controlled trials (RCTs) that used acupotomy therapy as the major intervention in adults with knee OA,\nAcupotomy, which involves the addition of a scalpel function to the conventional acupuncture treatment, has recently been applied as a conservative treatment method for lumbar disc herniation (LDH)\nLTS: The systematic review will provide high-quality evidence to assess the efficacy and safety of acupotomy for KOA by pain, stiffness, and dysfunction of knee joint, and quality of life, as well as adverse events.CONC\nAcupotomy has been widely used to treat KOA.\nAcupotomy, a biomechanical therapy guided by traditional Chinese medicine theory, alleviates cartilage degradation and is widely used in the clinic to treat KOA by correcting abnormal mechanics.\nBACKGROUND: Acupotomy has been widely used to treat nerve entrapment syndrome.\nAcupotomy has been widely used to treat calcaneodynia.\nThe aim of this study is to evaluate the efficacy and safety of the acupotomy treatment in patients with calcaneodynia.\notomy combined with rehabilitation was associated with significantly higher TER (RR 1.24, 95% CI 1.01-1.52, I = 77%) and gross motor function measure score (MD 12.62, 95% CI 11.75-13.49, I = 54%), and significantly lower muscle tone of gastrocnemius measured by the Ashworth scale or the modified Ashworth scale (MD -0.97, 95% CI -1.07 to -0.88, I = 0%) compared with rehabilitation alone. No\nBoth acupotomy and acupuncture have been widely used clinically to treat CSR in China with satisfied efficacy.\nGN AND METHODS: Total 75 patients were participated in acupotomy therapy and ultrasonic drug penetration to treat joint osteoarthritis. The\nAcupotomy Alleviates Energy Crisis at Rat Myofascial Trigger Points\nits in the acupotomy and EA groups underwent bilateral acupotomylysis intervention; those in the acupotomy-EA group underwent acupotomylysis and EA interventions. On the\nLTS: The systematic review will provide high-quality evidence to assess the efficacy and safety of acupotomy for KOA by pain, stiffness, and dysfunction of knee joint, and quality of life, as well as adverse events.CON\nAcupotomy for knee osteoarthritis: A systematic review protocol.\nThe aim of this study is to evaluate the efficacy and safety of acupotomy for the treatment of patients with KOA\nSION: The systematic review will provide evidence to assess the effectiveness and safety of acupotomy therapy for KOA patients.PROSP\ne systematic review will provide high-quality evidence to assess the efficacy and safety of acupotomy for KOA by pain, stiffness, and dysfunction of knee joint, and quality of life, as well as adverse events.CON\nThe aim of this study is to evaluate the efficacy and safety of acupotomy for the treatment of patients with KOA.M\nsystematic review will provide evidence to assess the effectiveness and safety of acupotomy therapy for KOA patients.PROS\nTo observe the clinical efficacy of minimally invasive acupotomy-injection technique with targeted three-point in the treatment of frozen shoulder.M\n[Percutaneous dynamic release in stress position by acupotomy in treating severe scapulohumeral periarthritis].\nTo investigate the clinical efficacy of acupotomy stress position percutaneous dynamic release for severe shoulder periarthritis.M\nl sequelae. Acupotomy, a modernized acupuncture form combining the effects of microsurgery and conventional acupuncture, may show specific benefits in the treatment of CP, especially with respect to\nBackground: Acupotomy, which involves the addition of a scalpel function to the conventional acupuncture treatment, has recently been applied as a conservative treatment method for lumbar disc herni\nhe methodological quality was medium-to-high in AMSTAR. All the included studies reviewed musculoskeletal disorders and reported a significantly higher total effective and cure rates in the acupotomy group for frozen shoulder, cervical spondylosis, third lumbar vertebrae transverse process syndrome, trigger finger, knee osteoarthritis, and lumbar spinal stenosis, compared to the other active control groups.CONCLUSION: Acupotomy showed promising results for some musculoskeletal disorders; however, additional high-quality evidence is",
"role": "user"
},
{
"content": "no",
"role": "assistant"
}
] |
science.data_reco_mcq_mc
|
science.data_reco_mcq_mc.1269
|
[
{
"content": "You are provided with a research question, keyphrases about the question, a description of candidate datasets and dataset options. Read the description of popular datasets provided below and select the ones that can be used to validate the following research question. Use your knowledge of machine learning datasets to make the best judgement.\nYour response should be formatted as a `json` array. For instance, for the query \"Semi supervised image classification\", a sample response might be: [\"CIFAR-10\", \"CIFAR-100\"]. Do not include any extra text in the response other than the answer array.\n\nQuery: We propose a novel attention-aware feature composition framework for person ReID.\n\nKeyphrases: person re-identification images\n\nDataset description:\nPKU-Reid: This dataset contains 114 individuals including 1824 images captured from two disjoint camera views. For each person, eight images are captured from eight different orientations under one camera view and are normalized to 128x48 pixels.\nMarket-1501: Market-1501 is a large-scale public benchmark dataset for person re-identification. It contains 1501 identities which are captured by six different cameras, and 32,668 pedestrian image bounding-boxes obtained using the Deformable Part Models pedestrian detector.\nMPII Cooking 2 Dataset: A dataset which provides detailed annotations for activity recognition.\nOccluded REID: Occluded REID is an occluded person dataset captured by mobile cameras, consisting of 2,000 images of 200 occluded persons (see Fig. (c)).\nCUHK03: The CUHK03 consists of 14,097 images of 1,467 different identities, where 6 campus cameras were deployed for image collection and each identity is captured by 2 campus cameras. This dataset provides two types of annotations, one by manually labelled bounding boxes and the other by bounding boxes produced by an automatic detector.\n\nOptions:- A: PKU-Reid B: Market-1501 C: MPII Cooking 2 Dataset D: Occluded REID E: CUHK03",
"role": "user"
},
{
"content": "[\"Market-1501\", \"CUHK03\"]",
"role": "assistant"
}
] |
science.qasa_abstractive_qa
|
science.qasa_abstractive_qa.1423
|
[
{
"content": "You will be shown an excerpt from a computer science scientific research paper, followed by a question about the paper. Please answer the question. Do not include any text in your response other than the answer.\n\nContext: We notice that state-of-the-art basic architectures such as Xception [3] and ResNeXt [40] become less efficient in extremely small networks because of the costly dense 1\\times 1 convolutions. We propose using pointwise group convolutions to reduce computation complexity of 1\\times 1 convolutions. To overcome the side effects brought by group convolutions, we come up with a novel channel shuffle operation to help the information flowing across feature channels. Based on the two techniques, we build a highly efficient architecture called ShuffleNet. Compared with popular structures like [30, 9, 40], for a given computation complexity budget, our ShuffleNet allows more feature map channels, which helps to encode more information and is especially critical to the performance of very small networks. Thanks to pointwise group convolution with channel shuffle, all components in ShuffleNet unit can be computed efficiently. Compared with ResNet [9] (bottleneck design) and ResNeXt [40], our structure has less complexity under the same settings. For example, given the input size c\\times h\\times w and the bottleneck channels m, ResNet unit requires hw(2cm+9m^{2}) FLOPs and ResNeXt has hw(2cm+9m^{2}/g) FLOPs, while our ShuffleNet unit requires only hw(2cm/g+9m) FLOPs, where g means the number of groups for convolutions. In other words, given a computational budget, ShuffleNet can use wider feature maps. We find this is critical for small networks, as tiny networks usually have an insufficient number of channels to process the information. From the results, we see that models with group convolutions (g>1) consistently perform better than the counterparts without pointwise group convolutions (g=1). Smaller models tend to benefit more from groups. For example, for ShuffleNet 1\\times the best entry (g=8) is 1.2% better than the counterpart, while for ShuffleNet 0.5\\times and 0.25\\times the gaps become 3.5% and 4.4% respectively. Note that group convolution allows more feature map channels for a given complexity constraint, so we hypothesize that the performance gain comes from wider feature maps which help to encode more information. In addition, a smaller network involves thinner feature maps, meaning it benefits more from enlarged feature maps. Table 2 also shows that for some models (e.g. ShuffleNet 0.5\\times) when group numbers become relatively large (e.g. g=8), the classification score saturates or even drops. With an increase in group number (thus wider feature maps), input channels for each convolutional filter become fewer, which may harm representation capability. Interestingly, we also notice that for smaller models such as ShuffleNet 0.25\\times larger group numbers tend to better results consistently, which suggests wider feature maps bring more benefits for smaller models. We use exactly the same settings to train these models. Results are shown in Table 4. Our ShuffleNet models outperform most others by a significant margin under different complexities. Interestingly, we find an empirical relationship between feature map channels and classification accuracy. For example, under the complexity of 38 MFLOPs, output channels of Stage 4 (see Table 1) for VGG-like, ResNet, ResNeXt, Xception-like, ShuffleNet models are 50, 192, 192, 288, 576 respectively, which is consistent with the increase of accuracy. Since the efficient design of ShuffleNet, we can use more channels for a given computation budget, thus usually resulting in better performance.\n\nQuestion: How ShuffleNet allowed more feature maps for a given computational complexity?",
"role": "user"
},
{
"content": "The ShuffleNet uses pointwise group convolution with channel shuffling, thus design-wise it has less complexity (requires hw(2cm/g+9m) FLOPs). This means it allows wider feature maps for a given computational budget. And the effect seems to increase the performance better as the model gets smaller.",
"role": "assistant"
}
] |
science.data_reco_mcq_mc
|
science.data_reco_mcq_mc.906
|
[
{
"content": "You are provided with a research question, keyphrases about the question, a description of candidate datasets and dataset options. Read the description of popular datasets provided below and select the ones that can be used to validate the following research question. Use your knowledge of machine learning datasets to make the best judgement.\nYour response should be formatted as a `json` array. For instance, for the query \"Semi supervised image classification\", a sample response might be: [\"CIFAR-10\", \"CIFAR-100\"]. Do not include any extra text in the response other than the answer array.\n\nQuery: We survey semantic segmentation from two aspects: traditional method and recent methods based on deep neural network.\n\nKeyphrases: semantic image segmentation images\n\nDataset description:\nGSL: Dataset Description The Greek Sign Language (GSL) is a large-scale RGB+D dataset, suitable for Sign Language Recognition (SLR) and Sign Language Translation (SLT). The video captures are conducted using an Intel RealSense D435 RGB+D camera at a rate of 30 fps.\nCityscapes: Cityscapes is a large-scale database which focuses on semantic understanding of urban street scenes. It provides semantic, instance-wise, and dense pixel annotations for 30 classes grouped into 8 categories (flat surfaces, humans, vehicles, constructions, objects, nature, sky, and void).\nCOCO: The MS COCO (Microsoft Common Objects in Context) dataset is a large-scale object detection, segmentation, key-point detection, and captioning dataset. The dataset consists of 328K images.\nImageNet: The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\nTHEODORE: Recent work about synthetic indoor datasets from perspective views has shown significant improvements of object detection results with Convolutional Neural Networks(CNNs). In this paper, we introduce THEODORE: a novel, large-scale indoor dataset containing 100,000 high- resolution diversified fisheye images with 14 classes.\nWord Sense Disambiguation: a Unified Evaluation Framework and Empirical Comparison: The Evaluation framework of Raganato et al. 2017 includes two training sets (SemCor-Miller et al., 1993- and OMSTI-Taghipour and Ng, 2015-) and five test sets from the Senseval/SemEval series (Edmonds and Cotton, 2001; Snyder and Palmer, 2004; Pradhan et al., 2007; Navigli et al., 2013; Moro and Navigli, 2015), standardized to the same format and sense inventory (i.e.\n\nOptions:- A: GSL B: Cityscapes C: COCO D: ImageNet E: THEODORE F: Word Sense Disambiguation: a Unified Evaluation Framework and Empirical Comparison",
"role": "user"
},
{
"content": "[\"Cityscapes\", \"COCO\", \"ImageNet\"]",
"role": "assistant"
}
] |
science.qasa_abstractive_qa
|
science.qasa_abstractive_qa.1553
|
[
{
"content": "You will be shown an excerpt from a computer science scientific research paper, followed by a question about the paper. Please answer the question. Do not include any text in your response other than the answer.\n\nContext: First, we observe that general-domain question-passage pairs can be acquired from community platforms Shah and Pomerantz (2010); Duan et al. (2017) or high quality academic datasets that are publicly available Kwiatkowski et al. (2019); Bajaj et al. (2016). Such resources have been used to create open domain QA passage retrieval models. However, as shown in Guo et al. (2020) and in our later experiments, neural retrieval models trained on the general domain data often do not transfer well, especially for specialized domains.\n\nQuestion: I am wondering why the authors used community QA datasets for training the question generator. How about open-domain information retrieval datasets, such as MS MARCO, which also cover diverse domains?",
"role": "user"
},
{
"content": "Using open-domain information retrieval datasets for training neural retrieval models do not transfer well especially for specialized domains.",
"role": "assistant"
}
] |
science.scitldr_aic
|
science.scitldr_aic.944
|
[
{
"content": "Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.\n\nPaper text:\n\nWe analyze speed of convergence to global optimum for gradient descent training a deep linear neural network by minimizing the L2 loss over whitened data. \nConvergence at a linear rate is guaranteed when the following hold: (i) dimensions of hidden layers are at least the minimum of the input and output dimensions; (ii) weight matrices at initialization are approximately balanced; and (iii) the initial loss is smaller than the loss of any rank-deficient solution. \nThe assumptions on initialization (conditions (ii) and (iii)) are necessary, in the sense that violating any one of them may lead to convergence failure. \nMoreover, in the important case of output dimension 1, i.e. scalar regression, they are met, and thus convergence to global optimum holds, with constant probability under a random initialization scheme. \nOur results significantly extend previous analyses, e.g., of deep linear residual networks (Bartlett et al., 2018).\nDeep learning builds upon the mysterious ability of gradient-based optimization methods to solve related non-convex problems.\nImmense efforts are underway to mathematically analyze this phenomenon.\nThe prominent landscape approach focuses on special properties of critical points (i.e. points where the gradient of the objective function vanishes) that will imply convergence to global optimum.\nSeveral papers (e.g. Ge et al. (2015) ; Lee et al. (2016) ) have shown that (given certain smoothness properties) it suffices for critical points to meet the following two conditions:\n(i) no poor local minima -every local minimum is close in its objective value to a global minimum; and\n(ii) strict saddle property -every critical point that is not a local minimum has at least one negative eigenvalue to its Hessian.\nWhile condition\n(i) does not always hold (cf.\nSafran and Shamir (2018) ), it has been established for various simple settings (e.g. Soudry and Carmon (2016) ; Kawaguchi (2016) ).\nCondition\n(ii) on the other hand seems less plausible, and is in fact provably false for models with three or more layers (cf. Kawaguchi FORMULA1 ), i.e. for deep networks.\nIt has only been established for problems involving shallow (two layer) models, e.g. matrix factorization (Ge et al. (2016) ; BID12 ).\nThe landscape approach as currently construed thus suffers from inherent limitations in proving convergence to global minimum for deep networks.A potential path to circumvent this obstacle lies in realizing that landscape properties matter only in the vicinity of trajectories that can be taken by the optimizer, which may be a negligible portion of the overall parameter space.\nSeveral papers (e.g. Saxe et al. (2014) ; BID1 ) have taken this trajectory-based approach, primarily in the context of linear neural networks -fully-connected neural networks with linear activation.\nLinear networks are trivial from a representational perspective, but not so in terms of optimization -they lead to non-convex training problems with multiple minima and saddle points.\nThrough a mix of theory and experiments, BID1 argued that such non-convexities may in fact be beneficial for gradient descent, in the sense that sometimes, adding (redundant) linear layers to a classic linear prediction model can accelerate the optimization.\nThis phenomenon challenges the holistic landscape view, by which convex problems are always preferable to non-convex ones.Even in the linear network setting, a rigorous proof of efficient convergence to global minimum has proved elusive.\nOne recent progress is the analysis of BID3 for linear residual networks -a particular subclass of linear neural networks in which the input, output and all hidden dimensions are equal, and all layers are initialized to be the identity matrix (cf.\nHardt and Ma (2016) ).\nThrough a trajectory-based analysis of gradient descent minimizing 2 loss over a whitened dataset (see Section 2), BID3 show that convergence to global minimum at a linear rateloss is less than > 0 after O(log 1 ) iterations -takes place if one of the following holds:\n(i) the objective value at initialization is sufficiently close to a global minimum; or\n(ii) a global minimum is attained when the product of all layers is positive definite.The current paper carries out a trajectory-based analysis of gradient descent for general deep linear neural networks, covering the residual setting of BID3 , as well as many more settings that better match practical deep learning.\nOur analysis draws upon the trajectory characterization of BID1 for gradient flow (infinitesimally small learning rate), together with significant new ideas necessitated due to discrete updates.\nUltimately, we show that when minimizing 2 loss of a deep linear network over a whitened dataset, gradient descent converges to the global minimum, at a linear rate, provided that the following conditions hold:\n(i) the dimensions of hidden layers are greater than or equal to the minimum between those of the input and output;\n(ii) layers are initialized to be approximately balanced (see Definition 1) -this is met under commonplace near-zero, as well as residual (identity) initializations; and\n(iii) the initial loss is smaller than any loss obtainable with rank deficiencies -this condition will hold with probability close to 0.5 if the output dimension is 1 (scalar regression) and standard (random) near-zero initialization is employed.\nOur result applies to networks with arbitrary depth and input/output dimensions, as well as any configuration of hidden layer widths that does not force rank deficiency (i.e. that meets condition\n(i)).\nThe assumptions on initialization (conditions\n(ii) and\n(iii)) are necessary, in the sense that violating any one of them may lead to convergence failure.\nMoreover, in the case of scalar regression, they are met with constant probability under a random initialization scheme.\nWe are not aware of any similarly general analysis for efficient convergence of gradient descent to global minimum in deep learning.The remainder of the paper is organized as follows.\nIn Section 2 we present the problem of gradient descent training a deep linear neural network by minimizing the 2 loss over a whitened dataset.\nSection 3 formally states our assumptions, and presents our convergence analysis.\nKey ideas brought forth by our analysis are demonstrated empirically in Section 4.\nSection 5 gives a review of relevant literature, including a detailed comparison of our results against those of BID3 .\nFinally, Section 6 concludes.\nFor deep linear neural networks, we have rigorously proven convergence of gradient descent to global minima, at a linear rate, provided that the initial weight matrices are approximately balanced and the initial end-to-end matrix has positive deficiency margin.\nThe result applies to networks with arbitrary depth, and any configuration of input/output/hidden dimensions that supports full rank, i.e. in which no hidden layer has dimension smaller than both the input and output.Our assumptions on initialization -approximate balancedness and deficiency margin -are both necessary, in the sense that violating any one of them may lead to convergence failure, as we demonstrated explicitly.\nMoreover, for networks with output dimension 1 (scalar regression), we have shown that a balanced initialization, i.e. a random choice of the end-to-end matrix followed by a balanced partition across all layers, leads assumptions to be met, and thus convergence to take place, with constant probability.\nRigorously proving efficient convergence with significant probability under customary layer-wise independent initialization remains an open problem.\nThe recent work of Shamir (2018) suggests that this may not be possible, as at least in some settings, the number of iterations required for convergence is exponential in depth with overwhelming probability.\nThis negative result, a theoretical manifestation of the \"vanishing gradient problem\", is circumvented by balanced initialization.\nThrough simple experiments we have shown that the latter can lead to favorable convergence in deep learning practice, as it does in theory.\nFurther investigation of balanced initialization, including development of variants for convolutional layers, is regarded as a promising direction for future research.The analysis in this paper uncovers special properties of the optimization landscape in the vicinity of gradient descent trajectories.\nWe expect similar ideas to prove useful in further study of gradient descent on non-convex objectives, including training losses of deep non-linear neural networks.A 2 LOSS OVER WHITENED DATA Recall the 2 loss of a linear predictor W ∈ R dy×dx as defined in Section 2: DISPLAYFORM0 By definition, when data is whitened, Λ xx is equal to identity, yielding: For approximate balancedness we have the following claim, which shows that it becomes more and more likely the smaller the standard deviation of initialization is: DISPLAYFORM1 Claim 2.\nAssume all entries in the matrices W j ∈ R dj ×dj−1 , j = 1, . . . , N , are drawn independently at random from a Gaussian distribution with mean zero and standard deviation s > 0.\nThen, for any δ > 0, the probability of W 1 , . . . , W N being δ-balanced is at least max{0, 1 − 10δ DISPLAYFORM2 In terms of deficiency margin, the claim below treats the case of a single output model (scalar regression), and shows that if the standard deviation of initialization is sufficiently small, with probability close to 0.5, a deficiency margin will be met.\nHowever, for this deficiency margin to meet a chosen threshold c, the standard deviation need be sufficiently large.Claim 3.\nThere is a constant C 1 > 0 such that the following holds.\nConsider the case where DISPLAYFORM3 13 and suppose all entries in the matrices W j ∈ R dj ×dj−1 , j = 1, . . . , N , are drawn independently at random from a Gaussian distribution with mean zero, whose standard deviation s > 0 is small with respect to the target, i.e. DISPLAYFORM4 , the probability of the end-to-end matrix W 1:N having deficiency margin c with respect to Φ is at least 0.49 if: DISPLAYFORM5 Proof.\nSee Appendix D.5.\n13 The requirement d0 ≥ 20 is purely technical, designed to simplify expressions in the claim.\n14 The probability 0.49 can be increased to any p < 1/2 by increasing the constant 10 5 in the upper bounds for s and c.15\nIt is not difficult to see that the latter threshold is never greater than the upper bound for s, thus sought-after standard deviations always exist.",
"role": "user"
},
{
"content": "We analyze gradient descent for deep linear neural networks, providing a guarantee of convergence to global optimum at a linear rate.",
"role": "assistant"
}
] |
science.bioasq_factoid_qa
|
science.bioasq_factoid_qa.747
|
[
{
"content": "You will be shown biomedical passages and your task is to the answer the provided question. Provide a concise and accurate response and do not include any additional text in the response other than the answer.\n\nQuestion: Where in the body would the navicular bone be found?\n\nContext: The accessory navicular (AN) is an accessory ossicle anatomically located on the medial side of the foot, proximal to the navicular and continuous with the tibialis posterior tendon.\nA 16-year-old boy developed left foot pain of unknown cause that was unresponsive to conservative treatment, associated with fever and difficulty walking. He was admitted to our hospital with osteomyelitis of the accessory and body of the navicular bone",
"role": "user"
},
{
"content": "foot",
"role": "assistant"
}
] |
science.qasa_abstractive_qa
|
science.qasa_abstractive_qa.1435
|
[
{
"content": "You will be shown an excerpt from a computer science scientific research paper, followed by a question about the paper. Please answer the question. Do not include any text in your response other than the answer.\n\nContext: Finally, we test how well the best convolutional cells may perform in a resource-constrained setting, e.g., mobile devices (Table 3). In these settings, the number of floating point operations is severely constrained and predictive performance must be weighed against latency requirements on a device with limited computational resources. MobileNet [24] and ShuffleNet [70] provide state-of-the-art results obtaining 70.6% and 70.9\\% accuracy, respectively on 224x224 images using \\sim550M multliply-add operations. An architecture constructed from the best convolutional cells achieves superior predictive performance (74.0% accuracy) surpassing previous models but with comparable computational demand. In summary, we find that the learned convolutional cells are flexible across model scales achieving state-of-the-art performance across almost 2 orders of magnitude in computational budget.\n\nQuestion: Does NASNets perform better than MobileNet, ShuffleNet under resource-constraint setting?",
"role": "user"
},
{
"content": "From the above evidential sentence, it is obvious that NASNets with 74% accuracy perform better than MobileNet and ShuffleNet with 70.6% and 70.9% accuracies respectively.",
"role": "assistant"
}
] |
science.ncbi_ner
|
science.ncbi_ner.468
|
[
{
"content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Disease\"].\n\nPlease return the output as a JSON object of the format: {\"Disease\" : [\"COVID-19\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nRecurrent meningococcal meningitis with absence of the sixth component of complement: an evaluation of underlying immunologic mechanisms. A 51/2-year-old black girl with recurrent meningococcal meningitis and absence of the sixth component of complement (C6) is reported. To explore the pathogenesis of recurrent neisserial infections in C6 deficiency, a detailed analysis of her immune competence was conducted. Her serum had normal chemotactic, opsonic, alternative complement pathway, and specific antibody activity, but lacked complement-mediated bacteriolytic activity. In addition, her C6-deficient serum was indistinguishable from normal serum in a complement-dependent assay of phagocyte bactericidal activity. Absent bacteriolysis remains the only consistent defect associated with recurrent neisserial infections and absence of one of the late-acting complement components..",
"role": "user"
},
{
"content": "{\"Disease\": [\"C6 deficiency\", \"meningococcal meningitis\", \"C6-deficient\", \"neisserial infections\", \"absence of the sixth component of complement\"]}",
"role": "assistant"
}
] |
science.annotated_materials_syntheses_events
|
science.annotated_materials_syntheses_events.120
|
[
{
"content": "You will be shown an abstract from a materials science research paper. Your task is to extract all synthesis operations described in the abstract into a structured JSON format. Your output should be a JSON array. Each entry in the array should be a JSON object describing a single synthesis operation. This object should have the following fields:\n\n- \"trigger\": A verbal predicate indicating a synthesis operation. This should be a string extracted directly from the abstract.\n- \"arguments\": An array of arguments specifying aspects of the synthesis operation. Each entry in the array should be a length-2 array formatted as [<argument_text>, <argument_type>]. The `argument_text` should be a string extracted directly from the abstract. The `argument_type` should be one of the following types: [\"Recipe_Precursor\", \"Atmospheric_Material\", \"Participant_Material\", \"Recipe_Target\", \"Solvent_Material\"].\n\nFor instance, suppose an excerpt from the abstract reads:\n\nTetraethoxysilane (TEOS, 98%) and cetyltrimethylammonium bromide (CTAB, 98%) were purchased from Fluka. The MTTZ-siloxy derivative (triethoxysilyl-propyl-5-mercapto-1-methyl-tetrazole) was synthesized as previously described [38].\n\nThis excerpt includes two operations, which could be annotated as follows.\n\n[{\"arguments\": [[\"Tetraethoxysilane\", \"Recipe_Precursor\"], [\"cetyltrimethylammonium bromide\", \"Recipe_Precursor\"]], \"trigger\": \"purchased\"}, {\"arguments\": [[\"MTTZ-siloxy derivative\", \"Recipe_Target\"]], \"trigger\": \"synthesized\"}]\n\nIn your response, please include the JSON and no other text.\n\nAbstract:\n10.1016/j.cattod.2013.04.013\nEthanol steam reforming over NiLaZr and NiCuLaZr mixed metal oxide catalysts\n\n\nNickel nitrate Ni(NO3)2*6H2O, copper nitrate (Cu(NO3)2*2.5H2O, lanthanum nitrate (La(NO3)3*5H2O, 70% (w/w) zirconium propoxide Zr(OCH2CH2CH3)4 in 1-propanol, and oxalic acid (HO2CCO2H) were purchased from Sigma-Aldrich. Ethanol was purchased from Biopack. All chemicals were reagent-grade and were used as received.\n\nNiLaZr and NiCuLaZr catalysts were prepared by co-precipitation following the addition of oxalic acid to an ethanolic solution of Cu(NO3)2, Ni(NO3)2, La(NO3)3 and Zr(OCH2CH2CH3)4. The precipitates were washed with ethanol, dried at 80 degC, and calcined for 2 h at different temperatures: 700 degC, 850 degC, 900 degC and 950 degC. The resulting NiLaZr catalysts contained 5% of Ni, and the NiCuLaZr catalysts contained 4% of Ni and 1% of Cu, on a weight basis. Where required, the final calcination temperature (degC) is indicated in the catalyst denotation.",
"role": "user"
},
{
"content": "[{\"arguments\": [[\"lanthanum nitrate\", \"Recipe_Precursor\"], [\"1-propanol\", \"Solvent_Material\"], [\"zirconium propoxide\", \"Recipe_Precursor\"], [\"Nickel nitrate\", \"Recipe_Precursor\"], [\"copper nitrate\", \"Recipe_Precursor\"], [\"oxalic acid\", \"Participant_Material\"]], \"trigger\": \"purchased\"}, {\"arguments\": [[\"Ethanol\", \"Solvent_Material\"]], \"trigger\": \"purchased\"}, {\"arguments\": [[\"NiLaZr\", \"Recipe_Target\"], [\"NiCuLaZr\", \"Recipe_Target\"]], \"trigger\": \"prepared\"}, {\"arguments\": [[\"Cu(NO3)2\", \"Recipe_Precursor\"], [\"Ni(NO3)2\", \"Recipe_Precursor\"], [\"Zr(OCH2CH2CH3)4\", \"Recipe_Precursor\"], [\"La(NO3)3\", \"Recipe_Precursor\"], [\"oxalic acid\", \"Participant_Material\"]], \"trigger\": \"addition\"}, {\"arguments\": [[\"ethanol\", \"Solvent_Material\"], [\"precipitates\", \"Participant_Material\"]], \"trigger\": \"washed\"}]",
"role": "assistant"
}
] |
science.scitldr_aic
|
science.scitldr_aic.1010
|
[
{
"content": "Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.\n\nPaper text:\n\nRecent research efforts enable study for natural language grounded navigation in photo-realistic environments, e.g., following natural language instructions or dialog.\nHowever, existing methods tend to overfit training data in seen environments and fail to generalize well in previously unseen environments.\nIn order to close the gap between seen and unseen environments, we aim at learning a generalizable navigation model from two novel perspectives:\n\n(1) we introduce a multitask navigation model that can be seamlessly trained on both Vision-Language Navigation (VLN) and Navigation from Dialog History (NDH) tasks, which benefits from richer natural language guidance and effectively transfers knowledge across tasks;\n\n(2) we propose to learn environment-agnostic representations for navigation policy that are invariant among environments, thus generalizing better on unseen environments.\n\nExtensive experiments show that our environment-agnostic multitask navigation model significantly reduces the performance gap between seen and unseen environments and outperforms the baselines on unseen environments by 16% (relative measure on success rate) on VLN and 120% (goal progress) on NDH, establishing the new state of the art for NDH task.\nNavigation in visual environments by following natural language guidance (Hemachandra et al., 2015) is a fundamental capability of intelligent robots that simulate human behaviors, because humans can easily reason about the language guidance and navigate efficiently by interacting with the visual environments.\nRecent efforts (Anderson et al., 2018b; Das et al., 2018; Thomason et al., 2019) empower large-scale learning of natural language grounded navigation that is situated in photorealistic simulation environments.\nNevertheless, the generalization problem commonly exists for these tasks, especially indoor navigation: the agent usually performs poorly on unknown environments that have never been seen during training.\nOne of the main causes for such behavior is data scarcity as it is expensive and time-consuming to extend either visual environments or natural language guidance.\nThe number of scanned houses for indoor navigation is limited due to high expense and privacy concerns.\nBesides, unlike vision-only navigation tasks (Mirowski et al., 2018; Xia et al., 2018; Manolis Savva* et al., 2019; Kolve et al., 2017) where episodes can be exhaustively sampled in simulation, natural language grounded navigation is supported by human demonstrated interaction and communication in natural language.\nIt is impractical to fully collect and cover all the samples for individual tasks.\nTherefore, it is essential though challenging to efficiently learn a more generalized policy for natural language grounded navigation tasks from existing data (Wu et al., 2018a; b) .\nIn this paper, we study how to resolve the generalization and data scarcity issues from two different angles.\nFirst, previous methods are trained for one task at the time, so each new task requires training a brand new agent instance that can only solve the one task it was trained on.\nIn this work, we propose a generalized multitask model for natural language grounded navigation tasks such as Vision-Language Navigation (VLN) and Navigation from Dialog History (NDH), aiming at efficiently transferring knowledge across tasks and effectively solving both tasks with one agent simultaneously.\nMoreover, although there are thousands of trajectories paired with language guidance, the underlying house scans are restricted.\nFor instance, the popular Matterport3D dataset (Chang et al., 2017) contains only 61 unique house scans in the training set.\nThe current models perform much better in seen environments by taking advantage of the knowledge of specific houses they have acquired over multiple task completions during training, but fail to generalize to houses not seen during training.\nHence we propose an environment-agnostic learning method to learn a visual representation that is invariant to specific environments but still able to support navigation.\nEndowed with the learned environment-agnostic representations, the agent is further prevented from the overfitting issue and generalizes better on unseen environments.\nTo the best of our knowledge, we are the first to introduce natural language grounded multitask and environment-agnostic training regimes and validate their effectiveness on VLN and NDH tasks.\nExtensive experiments demonstrate that our environment-agnostic multitask navigation model can not only efficiently execute different language guidance in indoor environments but also outperform the single-task baseline models by a large margin on both tasks.\nBesides, the performance gap between seen and unseen environments is significantly reduced.\nWe also set a new state of the art on NDH with over 120% improvement in terms of goal progress.\nIn this work, we show that the model trained using environment-agnostic multitask learning approach learns a generalized policy for the two natural language grounded navigation tasks.\nIt closes down the gap between seen and unseen environments, learns more generalized environment representations and effectively transfers knowledge across tasks outperforming baselines on both the tasks simultaneously by a significant margin.\nAt the same time, the two approaches independently benefit the agent learning and are complementary to each other.\nThere are possible future extensions to our work-the MT-RCM can further be adapted to other language-grounded navigation datasets, such as those using Street View (e.g., Touchdown (Chen et al., 2019) Table 6 presents a more detailed ablation of Table 5 using different parts of dialog history.\nThe results prove that agents rewarded for getting closer to the goal room consistently outperform agents rewarded for getting closer to the exact goal location.\nTable 7 presents a more detailed analysis from Table 3 with access to different parts of dialog history.\nThe models with shared language encoder consistently outperform those with separate encoders.\nFigure 4: Visualizing performance gap between seen and unseen environments for VLN and NDH tasks.\nFor VLN, the plotted metric is agent's success rate while for NDH, the metric is agent's progress.",
"role": "user"
},
{
"content": "We propose to learn a more generalized policy for natural language grounded navigation tasks via environment-agnostic multitask learning.",
"role": "assistant"
}
] |
science.chia_ner
|
science.chia_ner.1127
|
[
{
"content": "You will be shown inclusion or exclusion criteria for a clinical trial. Given these criteria, your task is to extract all unique entities of the following types: [\"Observation\", \"Condition\", \"Procedure\", \"Drug\", \"Temporal\", \"Multiplier\", \"Qualifier\", \"Negation\", \"ReferencePoint\", \"Scope\", \"Value\", \"Mood\", \"Device\", \"Visit\", \"Measurement\", \"Person\"].\n\nPlease return the output as a JSON object of the format: {\"Condition\" : [\"breast cancer\", ...], \"Device\" : [\"ports\", ...], \"Drug\" : [\"Atenolol\", ...], \"Measurement\" : [\"blood pressure\", ...], \"Mood\" : [\"suspected\", ...], \"Multiplier\" : [\"more than two\", ...], \"Negation\" : [\"no\", ...], \"Observation\" : [\"history\", ...], \"Person\" : [\"male\", ...], \"Procedure\" : [\"surgery\", ...], \"Qualifier\" : [\"severe\", ...], \"ReferencePoint\" : [\"enrollment\", ...], \"Scope\" : [\"infection\", ...], \"Temporal\" : [\"one day\", ...], \"Value\" : [\"50%\", ...], \"Visit\" : [\"ICU\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nPatients older than 18 years\nClassification of the American Society of Anesthesiologists (ASA I-III)\nNo cognitive deficits\nSigned informed consent prior to surgery",
"role": "user"
},
{
"content": "{\"Condition\": [\"cognitive deficits\"], \"Device\": [], \"Drug\": [], \"Measurement\": [\"Classification of the American Society of Anesthesiologists\", \"ASA\"], \"Mood\": [], \"Multiplier\": [], \"Negation\": [\"No\"], \"Observation\": [], \"Person\": [\"years\"], \"Procedure\": [], \"Qualifier\": [], \"ReferencePoint\": [], \"Scope\": [], \"Temporal\": [], \"Value\": [\"I-III\", \"older than 18\"], \"Visit\": []}",
"role": "assistant"
}
] |
science.bioasq_factoid_qa
|
science.bioasq_factoid_qa.1335
|
[
{
"content": "You will be shown biomedical passages and your task is to the answer the provided question. Provide a concise and accurate response and do not include any additional text in the response other than the answer.\n\nQuestion: In what part of the body is the masseter muscle located?\n\nContext: Intramuscular hemangiomas of the masseter muscle are uncommon tumors and therefore can be difficult to accurately diagnose preoperatively, due to the unfamiliar presentation and deep location in the lateral face. A case of intramuscular hemangioma of the masseter muscle in a 66-yearold woman is presented.\nAs part of this study, a comparative analysis of the temporal and masseter muscle electrical activity at rest and during mandible excursion positions (protrusion, laterotrusion and maximal occlusion) was performed among patients aged 21 to 68 years.\nMasticatory myofascial trigger points (TrP) are one of the major causes of nondental pain in the orofacial region. Intramuscular injections are considered the first-line treatment for myofascial TrPs. The objectives of this study were to evaluate and compare the effectiveness of local anesthesia (LA), botulinum toxin (BTX), and platelet-rich plasma (PRP) injections for the treatment of myofascial TrPs in the masseter muscle.\nThe organ extends from below the zygomatic bone line to the middle of the mandible body, between the skin and the masseter muscle, on both sides of the animal.\nThe reported lesions were located in cheek masseter muscle, parotid gland, upper neck, upper gingiva and body of mandibular.\nThe enormous strength of jaw closure is in large part due to the pinnated arrangement of the muscle fibres in the masseter.\norted lesions were located in cheek masseter muscle, parotid gland, upper neck, upper gingiva and body of mandibular. The clin\ne masseter muscle is the most frequent site and accounts for approximately 5% of all intramuscular vascular malformations in the head and neck region. M\ne relationship between the masseter muscle and the mandible, including its ramal and body components, was chosen as the model for study in nineteen human fetuses (ages 16 to 36 weeks). Cr\nFacial musculature is divided into masticatory muscles, i.e. M. masseter and M. buccalis, with bony insertions and smaller facial muscles involved in facial expression, which insert into bone and skin. T\nwo rigid bodies, the upper and lower jaws, including three dominant muscles, i.e. the masseter, the anterior portion of the temporalis and the lateral pterygoid. Static equilibrium\nBenign hypertrophy of the masseter muscle is an uncommon entity important in the differential diagnosis of head and neck masses, particularly a unilateral mass located in the cheek\nFirst, jaw opening that was produced by mechanically pulling down the mandible evoked an optical response, which reflects neural excitation, in two cortical regions: the most rostroventral part of the primary somatosensory cortex (S1) and the border between the ventral part of the secondary somatosensory cortex (S2) and the insular oral region (IOR)\nMasseter nerve stimulation initially excited the rostral part of the S2/IOR region, and an adjacent region responded to jaw opening\nThe caudal part of the region showing the maximum response overlapped with the region responding to jaw opening, whereas the rostral part overlapped with the region responding to electrical stimulation of the maxillary and mandibular molar pulps\nThe masseter muscle is an integral part of the oral facial complex and one of the muscles of mastication\n1% of cases. Masseter muscle localization is most common in head a\nAmong the head and neck muscles, masseter muscle is the most common location, with the rate of 4.9%.\nBenign hypertrophy of the masseter muscle is an uncommon entity important in the differential diagnosis of head and neck masses, particularly a unilateral mass located in the cheek.\nascular neoplasm Welsch and Hengerer, 1980 [4]. Of these 13.8% occur in the head and neck region, with the masseter muscle being the most common site, followed by the trapezi\nIt was concluded that: 1) radiopaque muscle markers are a valuable tool for analysis of muscle growth and alteration of muscle location; 2) the masseter muscle in the rhesus monkey undergoes elongation, probably due to addition of sarcomeres at the fiber-tendon junctions; and 3) posterior migration of the masseter muscle relative to the corpus of the mandible, probably due to the nature of its periosteal attachment, results in a stability of the anteroposterior position of the masseter muscle despite the anterior displacement of the mandible.\nAIM: The masseter muscle is often exploited by craniofacial surgeons in transposition operations to correct facial palsy, benign masseteric hypertrophy; or neurectomy-induced atrophy of the mu\nThe condition of a 36-year-old patient who applied to our clinic with the complaints of progressively increasing pain and progressively growing mass in the right cheek that appeared 1.5 years ago was diagnosed as arteriovenous malformation located in the masseter muscle.\nIn the cattle, goat, sheep and Sika deer, the rostral layer of the masseter muscle arises from the facial crest with its fleshy portion and is inserted into the tubercle on the mandible through the strong tendinous sheet.\nLess than 1% of the vascular tumors are localized in a muscle, 15% of them are in the head and neck muscles",
"role": "user"
},
{
"content": "Jaw, or head, or face, or orofacial region, or mandible",
"role": "assistant"
}
] |
science.bioasq_factoid_qa
|
science.bioasq_factoid_qa.47
|
[
{
"content": "You will be shown biomedical passages and your task is to the answer the provided question. Provide a concise and accurate response and do not include any additional text in the response other than the answer.\n\nQuestion: Which genome browser database for DNA shape annotations is available?\n\nContext: GBshape: a genome browser database for DNA shape annotations.\nOur Genome Browser for DNA shape annotations (GBshape; freely available at http://rohslab.cmb.usc.edu/GBshape/) provides minor groove width, propeller twist, roll, helix twist and hydroxyl radical cleavage predictions for the entire genomes of 94 organisms. Additional genomes can easily be added using the GBshape framework. GBshape can be used to visualize DNA shape annotations qualitatively in a genome browser track format, and to download quantitative values of DNA shape features as a function of genomic position at nucleotide resolution. As biological applications, we illustrate the periodicity of DNA shape features that are present in nucleosome-occupied sequences from human, fly and worm, and we demonstrate structural similarities between transcription start sites in the genomes of four Drosophila species.\nGBshape can be used to visualize DNA shape annotations qualitatively in a genome browser track format, and to download quantitative values of DNA shape features as a function of genomic position at nucleotide resolution.\nOur Genome Browser for DNA shape annotations (GBshape; freely available at http://rohslab.cmb.usc.edu/GBshape/) provides minor groove width, propeller twist, roll, helix twist and hydroxyl radical cleavage predictions for the entire genomes of 94 organisms.\nGBshape can be used to visualize DNA shape annotations qualitatively in a genome browser track format, and to download quantitative values of DNA shape features as a function of genomic position at nucleotide resolution.\nGBshape can be used to visualize DNA shape annotations qualitatively in a genome browser track format, and to download quantitative values of DNA shape features as a function of genomic position at nucleotide resolution. As biological applications, we illustrate the periodicity of DNA shape features that are present in nucleosome-occupied sequences from human, fly and worm, and we demonstrate structural similarities between transcription start sites in the genomes of four Drosophila species.\nWhereas higher-order effects, such as chromatin accessibility, cooperativity and cofactors, have been described, DNA shape recently gained attention as another feature that fine-tunes the DNA binding specificities of some transcription factor families. Our Genome Browser for DNA shape annotations (GBshape; freely available at http://rohslab.cmb.usc.edu/GBshape/) provides minor groove width, propeller twist, roll, helix twist and hydroxyl radical cleavage predictions for the entire genomes of 94 organisms.\nAdditional genomes can easily be added using the GBshape framework. GBshape can be used to visualize DNA shape annotations qualitatively in a genome browser track format, and to download quantitative values of DNA shape features as a function of genomic position at nucleotide resolution.\nOur Genome Browser for DNA shape annotations (GBshape; freely available at http://rohslab.cmb.usc.edu/GBshape/) provides minor groove width, propeller twist, roll, helix twist and hydroxyl radical cleavage predictions for the entire genomes of 94 organisms. Additional genomes can easily be added using the GBshape framework.\nGBshape can be used to visualize DNA shape annotations qualitatively in a genome browser track format, and to download quantitative values of DNA shape features as a function of genomic position at nucleotide resolution.\nOur Genome Browser for DNA shape annotations (GBshape; freely available at http://rohslab.cmb.usc.edu/GBshape/) provides minor groove width, propeller twist, roll, helix twist and hydroxyl radical cleavage predictions for the entire genomes of 94 organisms.",
"role": "user"
},
{
"content": "GBshape",
"role": "assistant"
}
] |
science.mslr2022_cochrane_multidoc_summarization
|
science.mslr2022_cochrane_multidoc_summarization.686
|
[
{
"content": "You will be provided with the titles and abstracts of a collection of medical RCT reports studying a clinical question -- for instance, the efficacy of an intervention on some medical outcome. Please write a few sentences synthesizing the findings of these reports, which could serve as a conclusion section for a systematic review article. Your response should not include any other text. The RCT reports are below.\n\n\n\nTitle 1:\nZiprasidone 40 and 120 mg/day in the acute exacerbation of schizophrenia and schizoaffective disorder: a 4-week placebo-controlled trial.\nAbstract 1:\nA double-blind, placebo-controlled, multicenter study, was performed to evaluate the efficacy and safety of ziprasidone in 139 patients with an acute exacerbation of schizophrenia or schizoaffective disorder. Patients were randomized to receive ziprasidone 40 mg/day, 120 mg/day or placebo for 28 days. Ziprasidone 120 mg/day was significantly more effective than placebo in improving the BPRS total, CGI-S. BPRS depression cluster and BPRS anergia cluster scores (all P < 0.05). Similarly, the percentages of patients classified as responders on the BPRS (> or = 30% reduction) and the CGI improvement (score < or = 2) were significantly greater with ziprasidone 120 mg/day compared with placebo (P < 0.05). The number of patients who experienced an adverse event was similar in all three treatment groups, and discontinuation due to adverse events was rare (five of 91 ziprasidone-treated patients). The most frequently reported adverse events, that were more common in either ziprasidone group than in the placebo group, were dyspepsia, constipation, nausea and abdominal pain. There was a notably low incidence extrapyramidal side-effects (including akathisia) and postural hypotension and no pattern of laboratory abnormalities or apparent weight gain. Ziprasidone-treated patients were not clinically different from placebo-treated patients on the Simpson-Angus Rating scale, Barnes Akathisia scale and AIMS assessments. These results indicate that ziprasidone 120 mg/day is effective in the treatment of the positive, negative and affective symptoms of schizophrenia and schizoaffective disorder with a very low side-effect burden.\n\n\nTitle 2:\nAn exploratory haloperidol-controlled dose-finding study of ziprasidone in hospitalized patients with schizophrenia or schizoaffective disorder.\nAbstract 2:\nNinety patients with schizophrenia or schizoaffective disorder according to DSM-III-R criteria participated in this double-blind, exploratory, dose-ranging trial. After a single-blind washout period of 4 to 7 days, patients were randomly assigned to receive one of four fixed doses of the new antipsychotic, ziprasidone 4 (N = 19), 10 (N = 17), 40 (N = 17), or 160 (N = 20) mg/day or haloperidol 15 mg/day (N = 17) for 4 weeks. A dose-response relationship among ziprasidone groups was established for improvements in Clinical Global Impression Severity (CGI-S) score (p = 0.002) but not in Brief Psychiatric Rating Scale (BPRS) total score (p = 0.08). The intent-to-treat analysis of mean changes from baseline in the BPRS total, BPRS Psychosis core, and CGI-S scores demonstrated that ziprasidone 160 mg/day was comparable with haloperidol in reducing overall psychopathology and positive symptoms and was superior to ziprasidone 4 mg/day. Despite the small sample size and short duration of the trial, the improvement in CGI-S with both ziprasidone 160 mg/day and haloperidol 15 mg/day was statistically significantly greater than with ziprasidone 4 mg/day (p = 0.001 andp = 0.005, respectively). The percentage of patients classified as responders on both the BPRS total (> or = 30% improvement) and CGI-Improvement (score of 1 or 2) scales in the ziprasidone 160 mg/day group was similar to that in the haloperidol group and nonsignificantly greater than that in the ziprasidone 4 mg/day group. On all assessments of clinical efficacy, the improvements associated with ziprasidone 4 mg/day, 10 mg/day, and 40 mg/day were similar. Concomitant benztropine use at any time during the study was less frequent with ziprasidone 160 mg/day (15%) than with haloperidol (53%). Haloperidol was associated with a sustained hyperprolactinemia, unlike ziprasidone, where only transient elevations in prolactin that returned to normal within the dosing interval were observed. Ziprasidone was well tolerated, and the incidence of adverse events was similar in all groups. The results of this study suggest that ziprasidone 160 mg/day is as effective as haloperidol 15 mg/day in reducing overall psychopathology and positive symptoms of an acute exacerbation of schizophrenia or schizoaffective disorder but has a lower potential to induce extrapyramidal symptoms.\n\n\nTitle 3:\nZiprasidone 80 mg/day and 160 mg/day in the acute exacerbation of schizophrenia and schizoaffective disorder: a 6-week placebo-controlled trial. Ziprasidone Study Group.\nAbstract 3:\nIn this double-blind study, patients with an acute exacerbation of schizophrenia or schizoaffective disorder were randomized to receive either ziprasidone 80 mg/day (n = 106) or 160 mg/day (n = 104) or placebo (n = 92), for 6 weeks. Both doses of ziprasidone were statistically significantly more effective than placebo in improving the PANSS total, BPRS total, BPRS core items, CGI-S, and PANSS negative subscale scores (p < .05). Ziprasidone 160 mg/day significantly improved depressive symptoms in patients with clinically significant depression at baseline (MADRS > or = 14, over-all mean 23.5) (p < .05) as compared with placebo. The percentage of patients experiencing adverse events was similar in each treatment group, and resultant discontinuation was rare. The most frequent adverse events associated with ziprasidone were generally mild dyspepsia, nausea, dizziness, and transient somnolence. Ziprasidone was shown to have a very low liability for inducing movement disorders and weight gain. The results indicate that ziprasidone is effective and well tolerated in the treatment of the positive, negative, and depressive symptoms of an acute exacerbation of schizophrenia or schizoaffective disorder.",
"role": "user"
},
{
"content": "Currently data are limited. Ziprasidone may be an effective antipsychotic with less extrapyramidal effects than haloperidol. It also, however, causes more nausea and vomiting than the typical drugs, and, at present, there is no data suggesting that it is different to other atypical compounds. Well planned, conducted and reported long term randomised trials are needed if ziprasidone is to be accepted into everyday use.",
"role": "assistant"
}
] |
science.bioasq_general_qa
|
science.bioasq_general_qa.1060
|
[
{
"content": "You will be shown a question, followed by excerpts from biomedical research papers. Please answer the question based on the provided context. Do not include any text in your response other than the answer.\n\nQuestion: What is OHRQoL?\n\nContext: To assess oral health-related quality of life (OHRQoL)\noral health-related quality of life (OHRQoL) of an individual. Four main dimensions, oral function, oro-facial pain, oro-facial appearance and psychosocial impact, are suggested to cover the concept of OHRQoL.\nSeveral studies have shown that orthodontic anomalies may affect young people's Oral Health-Related Quality of Life (OHRQoL).\nThis cross-sectional study compared the Oral-Health-Related Quality of Life (OHRQoL) in HIV negative patients (\nTo compare oral health-related quality of life (OHRQoL) and masticatory performance (MP) in patients treated with a mandibular complete denture (CD) and immediately loaded implant-supported prostheses\no evaluate the performance of complete dentures (CD) with anatomical and nonanatomical teeth in completely edentulous elderly individuals regarding oral health-related quality of life (OHRQoL),\nHow they perceive their oral health-related quality of life (OHRQoL) is important for health-care provider for understanding and planning in patient management.\nThe assessment of the oral health-related quality of life (OHRQoL) is possible with the Oral Health Impact Profile-14 (OHIP-14) questionnaire comprising 7 subdomains: functional limitation, physical pain, psychological discomfort, physical disability, psychological disability, social disability, and handicap\nThe OHRQoL is a concept that surpasses an exclusively clinical perception and includes functional, social, emotional, and environmental issues.\nOral health-related quality of life (OHRQoL) is a multidimensional construct that involves subjective evaluation of an individual's oral health.\nBACKGROUND: Oral health-related quality of life (OHRQoL) is a multidimensional, perception-based measure of how oral health affects social and physical functioning and self-image.\nSTATEMENT OF PROBLEM: Oral health-related quality of life (OHRQoL) is a subjective measure that assesses a person's perception of oral health.\nBACKGROUND: There is a lack of cohort studies on the influence factors of oral health-related quality of life (OHRQoL).\nOral health-related quality of life (OHRQoL) is an important dental patient-reported outcome which is commonly based on 4 dimensions, namely Oral Function, Orofacial Pain, Orofacial Appearance and Psychosocial Impact. The Or\nOBJECTIVES: Oral health-related quality of life (OHRQoL) is an important health indicator not only for individuals but also for oral health care providers to treat a person holis\nBACKGROUND: Oral health-related quality of life (OHRQoL) is a multidimensional, perception-based measure of how oral health affects social and physical functioning and sel\nOral health-related quality of life (OHRQoL) is a multidimensional construct that measures well-being associated with the teeth, mouth, and face. This\nOral health-related quality of life (OHRQoL) is an important aspect of health outcomes and its assessment should be made using validated instruments. The\nOral health-related quality of life (OHRQOL) is the perceived impact of one's own oral health on daily life.\nOral health-related quality of life (OHRQOL) is the component of health-related quality of life that relates to the effects of oral diseases and dental interventions on patients.\nUNLABELLED: Oral Health-Related Quality of Life (OHRQOL) is the shift in the perception of health from merely the absence of disease and infirmity to complete physical, mental and social\nBACKGROUND: Oral health-related quality of life (OHRQoL) is a multidimensional, perception-based measure of how oral health affects social and physical functioning and\nOral health-related quality of life (OHRQOL) is a relatively new but rapidly growing notion.\nOral health-related quality of life (OHRQOL) is a relatively new but rapidly growing notion.\nBACKGROUND: The Franciscan Hospital for Children Oral Health-Related Quality of Life questionnaire (FHC-OHRQOL-Q) is an instrument designed specifically for parents and caregivers of patients with special needs that has not yet been appli\nOral health-related quality of life (OHRQoL) is an important dental patient-reported outcome which is commonly based on 4 dimensions, namely Oral Function, Orofacial Pain, Orofacial Appearance and Psychosocial Impact.\nOral health-related quality of life (OHRQoL) is expected to be multidimensional.\nPURPOSE: Oral health-related quality of life (OHRQoL) is a construct for assessing the self-perceived oral health of\nSTATEMENT OF PROBLEM: Oral health-related quality of life (OHRQoL) is a subjective measure that assesses a person's perc\nUNLABELLED: Oral health-related quality of life (OHRQOL) in edentulous patients with complete dentures is oft",
"role": "user"
},
{
"content": "The assessment of the oral health-related quality of life (OHRQoL) is possible with the Oral Health Impact Profile-14 (OHIP-14) questionnaire comprising 7 subdomains: functional limitation, physical pain, psychological discomfort, physical disability, psychological disability, social disability, and handicap Diseases and disorders that damage the mouth and face can disturb well-being and his self-esteem. Oral health-related quality of life (OHRQOL) is a relatively new but rapidly growing concept and can be assessed using a number of different methods including standardized questionnaires. The OHRQoL is a concept that includes functional, social, emotional, and environmental issues. How patients perceive their oral health-related quality of life (OHRQoL) is important for health-care provider for understanding and planning in patient management.",
"role": "assistant"
}
] |
science.chia_ner
|
science.chia_ner.1174
|
[
{
"content": "You will be shown inclusion or exclusion criteria for a clinical trial. Given these criteria, your task is to extract all unique entities of the following types: [\"Observation\", \"Condition\", \"Procedure\", \"Drug\", \"Temporal\", \"Multiplier\", \"Qualifier\", \"Negation\", \"ReferencePoint\", \"Scope\", \"Value\", \"Mood\", \"Device\", \"Visit\", \"Measurement\", \"Person\"].\n\nPlease return the output as a JSON object of the format: {\"Condition\" : [\"breast cancer\", ...], \"Device\" : [\"ports\", ...], \"Drug\" : [\"Atenolol\", ...], \"Measurement\" : [\"blood pressure\", ...], \"Mood\" : [\"suspected\", ...], \"Multiplier\" : [\"more than two\", ...], \"Negation\" : [\"no\", ...], \"Observation\" : [\"history\", ...], \"Person\" : [\"male\", ...], \"Procedure\" : [\"surgery\", ...], \"Qualifier\" : [\"severe\", ...], \"ReferencePoint\" : [\"enrollment\", ...], \"Scope\" : [\"infection\", ...], \"Temporal\" : [\"one day\", ...], \"Value\" : [\"50%\", ...], \"Visit\" : [\"ICU\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nContraindication to Filgrastim",
"role": "user"
},
{
"content": "{\"Condition\": [\"Contraindication\"], \"Device\": [], \"Drug\": [\"Filgrastim\"], \"Measurement\": [], \"Mood\": [], \"Multiplier\": [], \"Negation\": [], \"Observation\": [], \"Person\": [], \"Procedure\": [], \"Qualifier\": [], \"ReferencePoint\": [], \"Scope\": [], \"Temporal\": [], \"Value\": [], \"Visit\": []}",
"role": "assistant"
}
] |
science.covid_deepset_qa
|
science.covid_deepset_qa.1628
|
[
{
"content": "You will be shown a paragraph from a biomedical research paper, followed by a question about the paper. Please extract text from the paragraph to answer the question. Do not include any text in your repsonse other than the answer.\n\nParagraph: This is especially evident in weak type 1 inflammation-inducing viruses such as RV and RSV (Kling et al., 2005; Wood et al., 2011; Ravi et al., 2019) . Additionally, there are also evidence of reduced type I (IFNβ) and III (IFNλ) interferon production due to type 2-skewed inflammation, which contributes to imperfect clearance of the virus resulting in persistence of viral components, or the live virus in the airway epithelium (Contoli et al., 2006; Hwang et al., 2019; Wark, 2019) . Due to the viral components remaining in the airway, antiviral genes such as type I interferons, inflammasome activating factors and cytokines remained activated resulting in prolong airway inflammation (Wood et al., 2011; Essaidi-Laziosi et al., 2018) . These factors enhance granulocyte infiltration thus prolonging the exacerbation symptoms. Such persistent inflammation may also be found within DNA viruses such as AdV, hCMV and HSV, whose infections generally persist longer (Imperiale and Jiang, 2015) , further contributing to chronic activation of inflammation when they infect the airway (Yang et al., 2008; Morimoto et al., 2009; Imperiale and Jiang, 2015; Lan et al., 2016; Tan et al., 2016; Kowalski et al., 2017) . With that note, human papilloma virus (HPV), a DNA virus highly associated with head and neck cancers and respiratory papillomatosis, is also linked with the chronic inflammation that precedes the malignancies (de Visser et al., 2005; Gillison et al., 2012; Bonomi et al., 2014; Fernandes et al., 2015) . Therefore, the role of HPV infection in causing chronic inflammation in the airway and their association to exacerbations of chronic airway inflammatory diseases, which is scarcely explored, should be investigated in the future. Furthermore, viral persistence which lead to continuous expression of antiviral genes may also lead to the development of steroid resistance, which is seen with RV, RSV, and PIV infection (Chi et al., 2011; Ford et al., 2013; Papi et al., 2013) . The use of steroid to suppress the inflammation may also cause the virus to linger longer in the airway due to the lack of antiviral clearance (Kim et al., 2008; Hammond et al., 2015; Hewitt et al., 2016; McKendry et al., 2016; Singanayagam et al., 2019b) . The concomitant development of steroid resistance together with recurring or prolong viral infection thus added considerable burden to the management of acute exacerbation, which should be the future focus of research to resolve the dual complications arising from viral infection. On the other end of the spectrum, viruses that induce strong type 1 inflammation and cell death such as IFV (Yan et al., 2016; Guibas et al., 2018) and certain CoV (including the recently emerged COVID-19 virus) (Tao et al., 2013; Yue et al., 2018; Zhu et al., 2020) , may not cause prolonged inflammation due to strong induction of antiviral clearance. These infections, however, cause massive damage and cell death to the epithelial barrier, so much so that areas of the epithelium may be completely absent post infection (Yan et al., 2016; Tan et al., 2019) . Factors such as RANTES and CXCL10, which recruit immune cells to induce apoptosis, are strongly induced from IFV infected epithelium (Ampomah et al., 2018; Tan et al., 2019) . Additionally, necroptotic factors such as RIP3 further compounds the cell deaths in IFV infected epithelium . The massive cell death induced may result in worsening of the acute exacerbation due to the release of their cellular content into the airway, further evoking an inflammatory response in the airway (Guibas et al., 2018) . Moreover, the destruction of the epithelial barrier may cause further contact with other pathogens and allergens in the airway which may then prolong exacerbations or results in new exacerbations. Epithelial destruction may also promote further epithelial remodeling during its regeneration as viral infection induces the expression of remodeling genes such as MMPs and growth factors . Infections that cause massive destruction of the epithelium, such as IFV, usually result in severe acute exacerbations with non-classical symptoms of chronic airway inflammatory diseases. Fortunately, annual vaccines are available to prevent IFV infections (Vasileiou et al., 2017; Zheng et al., 2018) ; and it is recommended that patients with chronic airway inflammatory disease receive their annual influenza vaccination as the best means to prevent severe IFV induced exacerbation. \n\nAnother mechanism that viral infections may use to drive acute exacerbations is the induction of vasodilation or tight junction opening factors which may increase the rate of infiltration. Infection with a multitude of respiratory viruses causes disruption of tight junctions with the resulting increased rate of viral infiltration. This also increases the chances of allergens coming into contact with airway immune cells. For example, IFV infection was found to induce oncostatin M (OSM) which causes tight junction opening (Pothoven et al., 2015; Tian et al., 2018) . Similarly, RV and RSV infections usually cause tight junction opening which may also increase the infiltration rate of eosinophils and thus worsening of the classical symptoms of chronic airway inflammatory diseases (Sajjan et al., 2008; Kast et al., 2017; Kim et al., 2018) . In addition, the expression of vasodilating factors and fluid homeostatic factors such as angiopoietin-like 4 (ANGPTL4) and bactericidal/permeabilityincreasing fold-containing family member A1 (BPIFA1) are also associated with viral infections and pneumonia development, which may worsen inflammation in the lower airway Akram et al., 2018) . These factors may serve as targets to prevent viral-induced exacerbations during the management of acute exacerbation of chronic airway inflammatory diseases. Another recent area of interest is the relationship between asthma and COPD exacerbations and their association with the airway microbiome. The development of chronic airway inflammatory diseases is usually linked to specific bacterial species in the microbiome which may thrive in the inflamed airway environment (Diver et al., 2019) . In the event of a viral infection such as RV infection, the effect induced by the virus may destabilize the equilibrium of the microbiome present (Molyneaux et al., 2013; Kloepfer et al., 2014; Kloepfer et al., 2017; Jubinville et al., 2018; van Rijn et al., 2019) . In addition, viral infection may disrupt biofilm colonies in the upper airway (e.g., Streptococcus pneumoniae) microbiome to be release into the lower airway and worsening the inflammation (Marks et al., 2013; Chao et al., 2014) . Moreover, a viral infection may also alter the nutrient profile in the airway through release of previously inaccessible nutrients that will alter bacterial growth (Siegel et al., 2014; Mallia et al., 2018) . Furthermore, the destabilization is further compounded by impaired bacterial immune response, either from direct viral influences, or use of corticosteroids to suppress the exacerbation symptoms (Singanayagam et al., 2018 (Singanayagam et al., , 2019a Wang et al., 2018; Finney et al., 2019) . All these may gradually lead to more far reaching effect when normal flora is replaced with opportunistic pathogens, altering the inflammatory profiles (Teo et al., 2018) . These changes may in turn result in more severe and frequent acute exacerbations due to the interplay between virus and pathogenic bacteria in exacerbating chronic airway inflammatory diseases (Wark et al., 2013; Singanayagam et al., 2018) . To counteract these effects, microbiome-based therapies are in their infancy but have shown efficacy in the treatments of irritable bowel syndrome by restoring the intestinal microbiome (Bakken et al., 2011) . Further research can be done similarly for the airway microbiome to be able to restore the microbiome following disruption by a viral infection. Viral infections can cause the disruption of mucociliary function, an important component of the epithelial barrier. Ciliary proteins FIGURE 2 | Changes in the upper airway epithelium contributing to viral exacerbation in chronic airway inflammatory diseases. The upper airway epithelium is the primary contact/infection site of most respiratory viruses. Therefore, its infection by respiratory viruses may have far reaching consequences in augmenting and synergizing current and future acute exacerbations. The destruction of epithelial barrier, mucociliary function and cell death of the epithelial cells serves to increase contact between environmental triggers with the lower airway and resident immune cells. The opening of tight junction increasing the leakiness further augments the inflammation and exacerbations. In addition, viral infections are usually accompanied with oxidative stress which will further increase the local inflammation in the airway. The dysregulation of inflammation can be further compounded by modulation of miRNAs and epigenetic modification such as DNA methylation and histone modifications that promote dysregulation in inflammation. Finally, the change in the local airway environment and inflammation promotes growth of pathogenic bacteria that may replace the airway microbiome. Furthermore, the inflammatory environment may also disperse upper airway commensals into the lower airway, further causing inflammation and alteration of the lower airway environment, resulting in prolong exacerbation episodes following viral infection. Viral specific trait contributing to exacerbation mechanism (with literature evidence) Oxidative stress ROS production (RV, RSV, IFV, HSV)\n\nAs RV, RSV, and IFV were the most frequently studied viruses in chronic airway inflammatory diseases, most of the viruses listed are predominantly these viruses. However, the mechanisms stated here may also be applicable to other viruses but may not be listed as they were not implicated in the context of chronic airway inflammatory diseases exacerbation (see text for abbreviations). that aid in the proper function of the motile cilia in the airways are aberrantly expressed in ciliated airway epithelial cells which are the major target for RV infection (Griggs et al., 2017) . Such form of secondary cilia dyskinesia appears to be present with chronic inflammations in the airway, but the exact mechanisms are still unknown (Peng et al., , 2019 Qiu et al., 2018) . Nevertheless, it was found that in viral infection such as IFV, there can be a change in the metabolism of the cells as well as alteration in the ciliary gene expression, mostly in the form of down-regulation of the genes such as dynein axonemal heavy chain 5 (DNAH5) and multiciliate differentiation And DNA synthesis associated cell cycle protein (MCIDAS) (Tan et al., 2018b . The recently emerged Wuhan CoV was also found to reduce ciliary beating in infected airway epithelial cell model (Zhu et al., 2020) . Furthermore, viral infections such as RSV was shown to directly destroy the cilia of the ciliated cells and almost all respiratory viruses infect the ciliated cells (Jumat et al., 2015; Yan et al., 2016; Tan et al., 2018a) . In addition, mucus overproduction may also disrupt the equilibrium of the mucociliary function following viral infection, resulting in symptoms of acute exacerbation (Zhu et al., 2009) . Hence, the disruption of the ciliary movement during viral infection may cause more foreign material and allergen to enter the airway, aggravating the symptoms of acute exacerbation and making it more difficult to manage. The mechanism of the occurrence of secondary cilia dyskinesia can also therefore be explored as a means to limit the effects of viral induced acute exacerbation. MicroRNAs (miRNAs) are short non-coding RNAs involved in post-transcriptional modulation of biological processes, and implicated in a number of diseases (Tan et al., 2014) . miRNAs are found to be induced by viral infections and may play a role in the modulation of antiviral responses and inflammation (Gutierrez et al., 2016; Deng et al., 2017; Feng et al., 2018) . In the case of chronic airway inflammatory diseases, circulating miRNA changes were found to be linked to exacerbation of the diseases (Wardzynska et al., 2020) . Therefore, it is likely that such miRNA changes originated from the infected epithelium and responding immune cells, which may serve to further dysregulate airway inflammation leading to exacerbations. Both IFV and RSV infections has been shown to increase miR-21 and augmented inflammation in experimental murine asthma models, which is reversed with a combination treatment of anti-miR-21 and corticosteroids (Kim et al., 2017) . IFV infection is also shown to increase miR-125a and b, and miR-132 in COPD epithelium which inhibits A20 and MAVS; and p300 and IRF3, respectively, resulting in increased susceptibility to viral infections (Hsu et al., 2016 (Hsu et al., , 2017 . Conversely, miR-22 was shown to be suppressed in asthmatic epithelium in IFV infection which lead to aberrant epithelial response, contributing to exacerbations (Moheimani et al., 2018) . Other than these direct evidence of miRNA changes in contributing to exacerbations, an increased number of miRNAs and other non-coding RNAs responsible for immune modulation are found to be altered following viral infections (Globinska et al., 2014; Feng et al., 2018; Hasegawa et al., 2018) .\n\nQuestion: What is recommended that patients with chronic airway inflammatory disease?",
"role": "user"
},
{
"content": "receive their annual influenza vaccination as the best means to prevent severe IFV induced exacerbation.",
"role": "assistant"
}
] |
science.anat_em_ner
|
science.anat_em_ner.240
|
[
{
"content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"AnatomicalEntity\"].\n\nPlease return the output as a JSON object of the format: {\"AnatomicalEntity\": [\"tumor cells\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nTherapeutic angiogenesis: a complex problem requiring a sophisticated approach.\nBlood and vascular disorders underlie a plethora of pathologic conditions and are the single most frequent cause of human disease. Ischemia, involving restricted blood flow to tissues is the most common consequence of vessel dysfunction resulting in the disruption of oxygen and nutrient delivery and the accumulation of waste metabolites. Cells cannot survive extended severe ischemia but may be able to adapt to a moderate condition where diffusion to and from bordering nonischemic regions sustains vital functions. Under this condition, the secondary functions of effected cells are likely to be impaired, and a new metabolic equilibrium is established, determined by the level of cross-diffusion and degree of hypoxia. In tissues with a normally high metabolic turnover such as skeletal and cardiac muscle, even mild ischemia causes hypoxia, acidosis, and depressed function (contractility) and eventually threatens myocyte viability and organ function. Ischemic cardiac muscle is additionally vulnerable because reperfusion is essential for survival but reperfusion itself poses additional stress principally from increased production of free radicals during reoxygenation. The latter effect is called reperfusion injury and can cause as much damage as the ischemia. The treatment possibilities for ischemia-related vascular disease are limited. Lipid/cholesterol-lowering agents, diet and antiplatelet adherence (aspirin) therapy may help slow the progression of vessel disease in some instances; but surgical reconstruction may be the only option in advanced stages, and even this is not always an option. An alternative and rather obvious strategy to treat ischemia is to activate endogenous angiogenic or arteriogenic pathways to stimulate revascularization of the tissue. The feasibility of such a strategy has now been established through the results of studies over the past decade, and a new discipline called therapeutic angiogenesis has emerged. This review focuses on the application of therapeutic angiogenesis for treating ischemic muscle disease and includes a critical evaluation of the parameters and limitations of current procedures. The development of this technology has benefited from its application to both peripheral and coronary artery disease and results from both are reviewed here.",
"role": "user"
},
{
"content": "{\"AnatomicalEntity\": [\"organ\", \"Blood\", \"tissues\", \"myocyte\", \"coronary artery\", \"vascular\", \"Cells\", \"platelet\", \"vessel\", \"tissue\", \"blood\", \"cells\", \"skeletal\", \"peripheral\", \"cardiac muscle\", \"muscle\"]}",
"role": "assistant"
}
] |
science.chia_ner
|
science.chia_ner.1497
|
[
{
"content": "You will be shown inclusion or exclusion criteria for a clinical trial. Given these criteria, your task is to extract all unique entities of the following types: [\"Observation\", \"Condition\", \"Procedure\", \"Drug\", \"Temporal\", \"Multiplier\", \"Qualifier\", \"Negation\", \"ReferencePoint\", \"Scope\", \"Value\", \"Mood\", \"Device\", \"Visit\", \"Measurement\", \"Person\"].\n\nPlease return the output as a JSON object of the format: {\"Condition\" : [\"breast cancer\", ...], \"Device\" : [\"ports\", ...], \"Drug\" : [\"Atenolol\", ...], \"Measurement\" : [\"blood pressure\", ...], \"Mood\" : [\"suspected\", ...], \"Multiplier\" : [\"more than two\", ...], \"Negation\" : [\"no\", ...], \"Observation\" : [\"history\", ...], \"Person\" : [\"male\", ...], \"Procedure\" : [\"surgery\", ...], \"Qualifier\" : [\"severe\", ...], \"ReferencePoint\" : [\"enrollment\", ...], \"Scope\" : [\"infection\", ...], \"Temporal\" : [\"one day\", ...], \"Value\" : [\"50%\", ...], \"Visit\" : [\"ICU\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nFemale age 20-50 y/o who plan to undergo abdominal myomectomy for symptomatic myomatous uterus",
"role": "user"
},
{
"content": "{\"Condition\": [\"myomatous uterus\"], \"Device\": [], \"Drug\": [], \"Measurement\": [], \"Mood\": [\"plan to undergo\"], \"Multiplier\": [], \"Negation\": [], \"Observation\": [], \"Person\": [\"Female\", \"age\"], \"Procedure\": [\"abdominal myomectomy\"], \"Qualifier\": [\"symptomatic\"], \"ReferencePoint\": [], \"Scope\": [], \"Temporal\": [], \"Value\": [\"20-50 y/o\"], \"Visit\": []}",
"role": "assistant"
}
] |
science.data_reco_mcq_sc
|
science.data_reco_mcq_sc.12302
|
[
{
"content": "Given a short query with keyphrases about a research idea in 2019 in the domain of artificial intelligence, recommend the relevant datasets to validate the research idea from options provided. A short description of the datasets is also provided to help you.\n\nQuery: I want to build a supervised model to detect novel objects from only a few annotated examples.\n\nKeyphrases: object detection images\n\nDataset description:\nCOCO: The MS COCO (Microsoft Common Objects in Context) dataset is a large-scale object detection, segmentation, key-point detection, and captioning dataset. The dataset consists of 328K images.\nFSOD: Few-Shot Object Detection Dataset (FSOD) is a high-diverse dataset specifically designed for few-shot object detection and intrinsically designed to evaluate thegenerality of a model on novel categories.\nDublinCity: A novel benchmark dataset that includes a manually annotated point cloud for over 260 million laser scanning points into 100'000 (approx.) assets from Dublin LiDAR point cloud [12] in 2015.\nFewRel 2.0: A more challenging task to investigate two aspects of few-shot relation classification models: (1) Can they adapt to a new domain with only a handful of instances? (2) Can they detect none-of-the-above (NOTA) relations?\n\nOptions:- A: COCO B: FSOD C: DublinCity D: FewRel 2.0",
"role": "user"
},
{
"content": "COCO",
"role": "assistant"
}
] |
science.nlmgene_ner
|
science.nlmgene_ner.404
|
[
{
"content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Gene\"].\n\nPlease return the output as a JSON object of the format: {\"Gene\": [\"IGF\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nTransmembrane-Bound IL-15-Promoted Epithelial-Mesenchymal Transition in Renal Cancer Cells Requires the Src-Dependent Akt/GSK-3beta/beta-Catenin Pathway. Intrarenal interleukin-15 (IL-15) plays a major role controlling epithelial survival and polarization both in physiological and pathologic conditions. Herein, we confirmed that human renal cell carcinomas (RCCs) express a membrane-bound IL-15 isoform displaying an unusual molecular weight of 27 kDa. Its stimulation with soluble IL-15 receptor alpha chain (s-IL-15Ralpha) triggers epithelial-mesenchymal transition (EMT) process as shown by the down-regulation of E-cadherin and zona occludens 1 and the up-regulation of vimentin and N-cadherin and promotes the migratory and invasive properties of RCC. S-IL-15Ralpha treatment triggered the Src/PI3K/Akt/GSK-3beta pathway and promoted beta-catenin nuclei translocation. Deactivation of this pathway by using Src-specific inhibitor PP2, PI3K inhibitor LY294002, and AKT inhibitor MK2206 hampered beta-catenin nuclei translocation and suppressed EMT, migration, and invasion of RCC. S-IL-15Ralpha treatment also enhanced Src-dependent phosphorylation of focal adhesion kinase (FAK) and extracellular signal-regulated kinase (Erk1/2). FAK knockdown significantly decreased the migration and invasion of RCC, which suggest that Src-FAK signaling was involved in s-IL-15Ralpha-favored migration and invasion of RCC. At the same time, inhibitors of Erk1/2 also significantly decreased the migration and invasion of RCC but could not reverse s-IL-15Ralpha-induced EMT. Taken together, our results reveal that Src-dependent PI3K/Akt/GSK3b/beta-catenin pathway is required for s-IL-15Ra-dependent induction of EMT in RCC, while Src-FAK and Src-Erk1/2 signaling were involved in s-IL-15Ralpha-promoted migration and invasion properties of RCC. Our study provides a better understanding of IL-15 signaling in RCC tumor progression, which may lead to novel targeted therapies and provide some suggestions when using IL-15 in clinic.",
"role": "user"
},
{
"content": "{\"Gene\": [\"GSK3b\", \"beta-Catenin\", \"AKT\", \"GSK-3beta\", \"N-cadherin\", \"focal adhesion kinase\", \"IL-15 receptor alpha\", \"IL-15\", \"beta-catenin\", \"Src\", \"Erk1/2\", \"extracellular signal-regulated kinase\", \"interleukin-15\", \"FAK\", \"IL-15Ra\", \"PI3K\", \"E-cadherin\", \"zona occludens 1\", \"Akt\", \"vimentin\", \"IL-15Ralpha\"]}",
"role": "assistant"
}
] |
science.data_reco_mcq_mc
|
science.data_reco_mcq_mc.219
|
[
{
"content": "You are provided with a research question, keyphrases about the question, a description of candidate datasets and dataset options. Read the description of popular datasets provided below and select the ones that can be used to validate the following research question. Use your knowledge of machine learning datasets to make the best judgement.\nYour response should be formatted as a `json` array. For instance, for the query \"Semi supervised image classification\", a sample response might be: [\"CIFAR-10\", \"CIFAR-100\"]. Do not include any extra text in the response other than the answer array.\n\nQuery: I am interested in implementing a system that will remove all the dynamic elements from an image (\n\nKeyphrases: image inpainting images\n\nDataset description:\nCARLA: CARLA (CAR Learning to Act) is an open simulator for urban driving, developed as an open-source layer over Unreal Engine 4. Technically, it operates similarly to, as an open source layer over Unreal Engine 4 that provides sensors in the form of RGB cameras (with customizable positions), ground truth depth maps, ground truth semantic segmentation maps with 12 semantic classes designed for driving (road, lane marking, traffic sign, sidewalk and so on), bounding boxes for dynamic objects in the environment, and measurements of the agent itself (vehicle location and orientation).\nCommonsenseQA: The CommonsenseQA is a dataset for commonsense question answering task. The dataset consists of 12,247 questions with 5 choices each.\nCityscapes: Cityscapes is a large-scale database which focuses on semantic understanding of urban street scenes. It provides semantic, instance-wise, and dense pixel annotations for 30 classes grouped into 8 categories (flat surfaces, humans, vehicles, constructions, objects, nature, sky, and void).\nWebText: WebText is an internal OpenAI corpus created by scraping web pages with emphasis on document quality. The authors scraped all outbound links from Reddit which received at least 3 karma.\nPlaces: The Places dataset is proposed for scene recognition and contains more than 2.5 million images covering more than 205 scene categories with more than 5,000 images per category.\nConvAI2: The ConvAI2 NeurIPS competition aimed at finding approaches to creating high-quality dialogue agents capable of meaningful open domain conversation. The ConvAI2 dataset for training models is based on the PERSONA-CHAT dataset.\nBanglaLekhaImageCaptions: This dataset consists of images and annotations in Bengali. The images are human annotated in Bengali by two adult native Bengali speakers.\n\nOptions:- A: CARLA B: CommonsenseQA C: Cityscapes D: WebText E: Places F: ConvAI2 G: BanglaLekhaImageCaptions",
"role": "user"
},
{
"content": "[\"CARLA\", \"Cityscapes\", \"Places\"]",
"role": "assistant"
}
] |
science.data_reco_mcq_sc
|
science.data_reco_mcq_sc.8284
|
[
{
"content": "Given a short query with keyphrases about a research idea in 2019 in the domain of artificial intelligence, recommend the relevant datasets to validate the research idea from options provided. A short description of the datasets is also provided to help you.\n\nQuery: A novel skeleton-based action recognition method is proposed to solve the two problems caused by the existing methods\n\nKeyphrases: action recognition skeleton\n\nDataset description:\nUCF101: UCF101 dataset is an extension of UCF50 and consists of 13,320 video clips, which are classified into 101 categories. These 101 categories can be classified into 5 types (Body motion, Human-human interactions, Human-object interactions, Playing musical instruments and Sports).\nUTD-MHAD: The UTD-MHAD dataset consists of 27 different actions performed by 8 subjects. Each subject repeated the action for 4 times, resulting in 861 action sequences in total.\nG3D: The Gaming 3D Dataset (G3D) focuses on real-time action recognition in a gaming scenario. It contains 10 subjects performing 20 gaming actions: “punch right”, “punch left”, “kick right”, “kick left”, “defend”, “golf swing”, “tennis swing forehand”, “tennis swing backhand”, “tennis serve”, “throw bowling ball”, “aim and fire gun”, “walk”, “run”, “jump”, “climb”, “crouch”, “steer a car”, “wave”, “flap” and “clap”.\nPKU-MMD: The PKU-MMD dataset is a large skeleton-based action detection dataset. It contains 1076 long untrimmed video sequences performed by 66 subjects in three camera views.\n\nOptions:- A: UCF101 B: UTD-MHAD C: G3D D: PKU-MMD",
"role": "user"
},
{
"content": "UCF101",
"role": "assistant"
}
] |
science.scitldr_aic
|
science.scitldr_aic.988
|
[
{
"content": "Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.\n\nPaper text:\n\nKey equatorial climate phenomena such as QBO and ENSO have never been adequately explained as deterministic processes.\nThis in spite of recent research showing growing evidence of predictable behavior.\nThis study applies the fundamental Laplace tidal equations with simplifying assumptions along the equator — i.e. no Coriolis force and a small angle approximation.\nThe solutions to the partial differential equations are highly non-linear related to Navier-Stokes and only search approaches can be used to fit to the data.",
"role": "user"
},
{
"content": "Analytical Formulation of Equatorial Standing Wave Phenomena: Application to QBO and ENSO",
"role": "assistant"
}
] |
science.multixscience_multidoc_summarization
|
science.multixscience_multidoc_summarization.26049
|
[
{
"content": "You are provided with the abstract of a scientific paper and the abstracts of the papers it references. Your task is to write a Related Work section for the main paper. This section should concisely summarize and integrate the key points from the cited abstracts, reflecting how they relate to the main paper's contributions and context. In your summary, use numerical citation markers like [1], [2], etc., to refer to the corresponding papers.\n\nMain Abstract:\nAnswer Set Programming (ASP) is a popular framework for modelling combinatorial problems. However, ASP cannot be used easily for reasoning about uncertain information. Possibilistic ASP (PASP) is an extension of ASP that combines possibilistic logic and ASP. In PASP a weight is associated with each rule, whereas this weight is interpreted as the certainty with which the conclusion can be established when the body is known to hold. As such, it allows us to model and reason about uncertain information in an intuitive way. In this paper we present new semantics for PASP in which rules are interpreted as constraints on possibility distributions. Special models of these constraints are then identified as possibilistic answer sets. In addition, since ASP is a special case of PASP in which all the rules are entirely certain, we obtain a new characterization of ASP in terms of constraints on possibility distributions. This allows us to uncover a new form of disjunction, called weak disjunction, that has not been previously considered in the literature. In addition to introducing and motivating the semantics of weak disjunction, we also pinpoint its computational complexity. In particular, while the complexity of most reasoning tasks coincides with standard disjunctive ASP, we find that brave reasoning for programs with weak disjunctions is easier.\n\nCited Abstract(s):\n[1]: The paper proposes a new knowledge representation language, called DLP<, which extends disjunctive logic programming (with strong negation) by inheritance. The addition of inheritance enhances the knowledge modeling features of the language providing a natural representation of default reasoning with exceptions. A declarative model-theoretic semantics of DLP< is provided, which is shown to generalize the Answer Set Semantics of disjunctive logic programs. The knowledge modeling features of the language are illustrated by encoding classical nonmonotonic problems in DLP<. The complexity of DLP< is analyzed, proving that inheritance does not cause any computational overhead, as reasoning in DLP< has exactly the same complexity as reasoning in disjunctive logic programming. This is confirmed by the existence of an efficient translation from DLP< to plain disjunctive logic programming. Using this translation, an advanced KR system supporting the DLP< language has been implemented on top of the DLV system and has subsequently been integrated into DLV.",
"role": "user"
},
{
"content": "Not all existing extensions of disjunction allow non-minimal models. For example, in [1] an extension of disjunctive logic programs is presented which adds the idea of inheritance. Conflicts between rules are then resolved in favor of more specific rules. Such an approach allows for an intuitive way to deal with default reasoning and exceptions. In particular, the semantics allow for rules to be marked as being defeasible and allows to specify an order or inheritance tree among (sets of) rules. Interestingly, the complexity of the resulting system is not affected and coincides with the complexity of ordinary disjunctive programs.",
"role": "assistant"
}
] |
science.medmentions_ner
|
science.medmentions_ner.2356
|
[
{
"content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"HealthCareActivity\", \"InjuryOrPoisoning\", \"BodySubstance\", \"IntellectualProduct\", \"AnatomicalStructure\", \"SpatialConcept\", \"Chemical\", \"Bacterium\", \"MedicalDevice\", \"Organization\", \"BiomedicalOccupationOrDiscipline\", \"Finding\", \"BiologicFunction\", \"Virus\", \"ResearchActivity\", \"ClinicalAttribute\", \"PopulationGroup\", \"Eukaryote\", \"BodySystem\", \"Food\", \"ProfessionalOrOccupationalGroup\"].\n\nPlease return the output as a JSON object of the format: {\"Virus\": [\"HIV\", ...], \"Bacterium\": [\"MRSA\", ...], \"AnatomicalStructure\": [\"Lung\", ...], \"BodySystem\": [\"CNS\", ...], \"BodySubstance\": [\"Serum\", ...], \"Finding\": [\"Headache\", ...], \"InjuryOrPoisoning\": [\"Fracture\", ...], \"BiologicFunction\": [\"Death\", ...], \"HealthCareActivity\": [\"Biopsy\", ...], \"ResearchActivity\": [\"Clinical trial\", ...], \"MedicalDevice\": [\"Lenses\", ...], \"SpatialConcept\": [\"Camps\", ...], \"BiomedicalOccupationOrDiscipline\": [\"Forensic medicine\", ...], \"Organization\": [\"WHO\", ...], \"ProfessionalOrOccupationalGroup\": [\"Provider\", ...], \"PopulationGroup\": [\"Swimmers\", ...], \"Chemical\": [\"Gold\", ...], \"Food\": [\"Rice\", ...], \"IntellectualProduct\": [\"RPAM\", ...], \"ClinicalAttribute\": [\"Biomarker\", ...], \"Eukaryote\": [\"Dogs\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nSupplementary Health Insurance from the consumer point of view: Are Israelis consumers doing an informed rational choice when purchasing Supplementary Health Insurance? The National Health Insurance Law in Israel ensures basic health basket eligibility for all its citizens. A supplemental health insurance plan (SHIP) is offered for an additional fee. Over the years, the percentage of supplemental insurance's holders has risen considerably, ranking among the highest in OECD countries. The assumption that consumers implement an informed rational choice based on relevant information is doubtful. Are consumers sufficiently well informed to make market processes work well? To examine perspectives, preferences and knowledge of Israelis in relation to SHIP. A telephone survey was conducted with a representative sample of the Israeli adult population. 703 interviews were completed. The response rate was 50.3%. 85% of the sample reported possessing SHIP. This survey found that most of the Israeli public parched additional insurance coverage however did not show a significant knowledge about the benefits provided by the supplementary insurance, at least in the three measurements used in this study. The scope of SHIP acquisition is very broad and cannot be explained in economic terms alone. Acquiring SHIP became a default option rather than an active decision. It is time to review the goals, achievements and side effects of SHIP and to create new policy for the future.",
"role": "user"
},
{
"content": "{\"AnatomicalStructure\": [], \"Bacterium\": [], \"BiologicFunction\": [\"point of view\", \"active decision\", \"response\"], \"BiomedicalOccupationOrDiscipline\": [], \"BodySubstance\": [], \"BodySystem\": [], \"Chemical\": [], \"ClinicalAttribute\": [], \"Eukaryote\": [], \"Finding\": [\"did not show\"], \"Food\": [], \"HealthCareActivity\": [\"insurance\\u0027s\", \"basic health basket\", \"Health Insurance\", \"insurance\"], \"InjuryOrPoisoning\": [], \"IntellectualProduct\": [\"survey\", \"knowledge\", \"policy\", \"goals\"], \"MedicalDevice\": [], \"Organization\": [\"OECD\"], \"PopulationGroup\": [\"Israeli\", \"consumers\", \"population\", \"sample\", \"Israelis\", \"citizens\", \"public\", \"consumer\"], \"ProfessionalOrOccupationalGroup\": [], \"ResearchActivity\": [\"study\"], \"SpatialConcept\": [\"countries\", \"Israel\"], \"Virus\": []}",
"role": "assistant"
}
] |
science.data_reco_mcq_mc
|
science.data_reco_mcq_mc.161
|
[
{
"content": "You are provided with a research question, keyphrases about the question, a description of candidate datasets and dataset options. Read the description of popular datasets provided below and select the ones that can be used to validate the following research question. Use your knowledge of machine learning datasets to make the best judgement.\nYour response should be formatted as a `json` array. For instance, for the query \"Semi supervised image classification\", a sample response might be: [\"CIFAR-10\", \"CIFAR-100\"]. Do not include any extra text in the response other than the answer array.\n\nQuery: We present a baseline algorithm for the detection of diabetic retinopathy lesions, which is based on\n\nKeyphrases: lesion detection medical images\n\nDataset description:\nMiddlebury: The Middlebury Stereo dataset consists of high-resolution stereo sequences with complex geometry and pixel-accurate ground-truth disparity data. The ground-truth disparities are acquired using a novel technique that employs structured lighting and does not require the calibration of the light projectors.\nDRIVE: The Digital Retinal Images for Vessel Extraction (DRIVE) dataset is a dataset for retinal vessel segmentation. It consists of a total of JPEG 40 color fundus images; including 7 abnormal pathology cases.\nIDRiD: Indian Diabetic Retinopathy Image Dataset (IDRiD) dataset consists of typical diabetic retinopathy lesions and normal retinal structures annotated at a pixel level. This dataset also provides information on the disease severity of diabetic retinopathy and diabetic macular edema for each image.\nADAM: ADAM is organized as a half day Challenge, a Satellite Event of the ISBI 2020 conference in Iowa City, Iowa, USA. The ADAM challenge focuses on the investigation and development of algorithms associated with the diagnosis of Age-related Macular degeneration (AMD) and segmentation of lesions in fundus photos from AMD patients.\nDiabetic Retinopathy Detection Dataset: A large scale of retina image dataset.\n\nOptions:- A: Middlebury B: DRIVE C: IDRiD D: ADAM E: Diabetic Retinopathy Detection Dataset",
"role": "user"
},
{
"content": "[\"Middlebury\", \"DRIVE\"]",
"role": "assistant"
}
] |
science.medmentions_ner
|
science.medmentions_ner.10
|
[
{
"content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"HealthCareActivity\", \"InjuryOrPoisoning\", \"BodySubstance\", \"IntellectualProduct\", \"AnatomicalStructure\", \"SpatialConcept\", \"Chemical\", \"Bacterium\", \"MedicalDevice\", \"Organization\", \"BiomedicalOccupationOrDiscipline\", \"Finding\", \"BiologicFunction\", \"Virus\", \"ResearchActivity\", \"ClinicalAttribute\", \"PopulationGroup\", \"Eukaryote\", \"BodySystem\", \"Food\", \"ProfessionalOrOccupationalGroup\"].\n\nPlease return the output as a JSON object of the format: {\"Virus\": [\"HIV\", ...], \"Bacterium\": [\"MRSA\", ...], \"AnatomicalStructure\": [\"Lung\", ...], \"BodySystem\": [\"CNS\", ...], \"BodySubstance\": [\"Serum\", ...], \"Finding\": [\"Headache\", ...], \"InjuryOrPoisoning\": [\"Fracture\", ...], \"BiologicFunction\": [\"Death\", ...], \"HealthCareActivity\": [\"Biopsy\", ...], \"ResearchActivity\": [\"Clinical trial\", ...], \"MedicalDevice\": [\"Lenses\", ...], \"SpatialConcept\": [\"Camps\", ...], \"BiomedicalOccupationOrDiscipline\": [\"Forensic medicine\", ...], \"Organization\": [\"WHO\", ...], \"ProfessionalOrOccupationalGroup\": [\"Provider\", ...], \"PopulationGroup\": [\"Swimmers\", ...], \"Chemical\": [\"Gold\", ...], \"Food\": [\"Rice\", ...], \"IntellectualProduct\": [\"RPAM\", ...], \"ClinicalAttribute\": [\"Biomarker\", ...], \"Eukaryote\": [\"Dogs\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nApplication of an Analytical Solution as a Screening Tool for Sea Water Intrusion Sea water intrusion into aquifers is problematic in many coastal areas. The physics and chemistry of this issue are complex, and sea water intrusion remains challenging to quantify. Simple assessment tools like analytical models offer advantages of rapid application, but their applicability to field situations is unclear. This study examines the reliability of a popular sharp-interface analytical approach for estimating the extent of sea water in a homogeneous coastal aquifer subjected to pumping and regional flow effects and under steady-state conditions. The analytical model is tested against observations from Canada, the United States, and Australia to assess its utility as an initial approximation of sea water extent for the purposes of rapid groundwater management decision making. The occurrence of sea water intrusion resulting in increased salinity at pumping wells was correctly predicted in approximately 60% of cases. Application of a correction to account for dispersion did not markedly improve the results. Failure of the analytical model to provide correct predictions can be attributed to mismatches between its simplifying assumptions and more complex field settings. The best results occurred where the toe of the salt water wedge is expected to be the closest to the coast under predevelopment conditions. Predictions were the poorest for aquifers where the salt water wedge was expected to extend further inland under predevelopment conditions and was therefore more dispersive prior to pumping. Sharp-interface solutions remain useful tools to screen for the vulnerability of coastal aquifers to sea water intrusion, although the significant sources of uncertainty identified in this study require careful consideration to avoid misinterpreting sharp-interface results.",
"role": "user"
},
{
"content": "{\"AnatomicalStructure\": [], \"Bacterium\": [], \"BiologicFunction\": [\"decision making\"], \"BiomedicalOccupationOrDiscipline\": [], \"BodySubstance\": [], \"BodySystem\": [], \"Chemical\": [\"salt water\"], \"ClinicalAttribute\": [], \"Eukaryote\": [], \"Finding\": [\"salinity\", \"problematic\", \"improve\"], \"Food\": [], \"HealthCareActivity\": [\"approximation\", \"screen\", \"Screening\"], \"InjuryOrPoisoning\": [], \"IntellectualProduct\": [\"tools\", \"analytical model\", \"assessment tools\", \"analytical models\", \"Tool\"], \"MedicalDevice\": [], \"Organization\": [], \"PopulationGroup\": [], \"ProfessionalOrOccupationalGroup\": [], \"ResearchActivity\": [\"study\"], \"SpatialConcept\": [\"coast\", \"coastal areas\", \"Australia\", \"wedge\", \"inland\", \"groundwater\", \"United States\", \"dispersion\", \"coastal\", \"Canada\"], \"Virus\": []}",
"role": "assistant"
}
] |
science.scitldr_aic
|
science.scitldr_aic.858
|
[
{
"content": "Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.\n\nPaper text:\n\nIn this paper, we explore new approaches to combining information encoded within the learned representations of autoencoders.\nWe explore models that are capable of combining the attributes of multiple inputs such that a resynthesised output is trained to fool an adversarial discriminator for real versus synthesised data.\nFurthermore, we explore the use of such an architecture in the context of semi-supervised learning, where we learn a mixing function whose objective is to produce interpolations of hidden states, or masked combinations of latent representations that are consistent with a conditioned class label.\nWe show quantitative and qualitative evidence that such a formulation is an interesting avenue of research.\nThe autoencoder is a fundamental building block in unsupervised learning.\nAutoencoders are trained to reconstruct their inputs after being processed by two neural networks: an encoder which encodes the input to a high-level representation or bottleneck, and a decoder which performs the reconstruction using the representation as input.\nOne primary goal of the autoencoder is to learn representations of the input data which are useful BID1 , which may help in downstream tasks such as classification BID27 BID9 or reinforcement learning BID20 BID5 .\nThe representations of autoencoders can be encouraged to contain more 'useful' information by restricting the size of the bottleneck, through the use of input noise (e.g., in denoising autoencoders, BID23 , through regularisation of the encoder function BID17 , or by introducing a prior BID11 .\nAnother goal is in learning interpretable representations BID3 BID10 .\nIn unsupervised learning, learning often involves qualitative objectives on the representation itself, such as disentanglement of latent variables BID12 or maximisation of mutual information BID3 BID0 BID8 .Mixup\nBID26 and manifold mixup BID21 are regularisation techniques that encourage deep neural networks to behave linearly between two data samples. These\nmethods artificially augment the training set by producing random convex combinations between pairs of examples and their corresponding labels and training the network on these combinations. This\nhas the effect of creating smoother decision boundaries, which can have a positive effect on generalisation performance. In BID21\n, the random convex combinations are computed in the hidden space of the network. This procedure\ncan be viewed as using the high-level representation of the network to produce novel training examples and provides improvements over strong baselines in the supervised learning. Furthermore, BID22\npropose a simple and efficient method for semi-supervised classification based on random convex combinations between unlabeled samples and their predicted labels.In this paper we explore the use of a wider class of mixing functions for unsupervised learning, mixing in the bottleneck layer of an autoencoder. These mixing functions\ncould consist of continuous interpolations between latent vectors such as in BID21 , to binary masking operations to even a deep neural network which learns the mixing operation. In order to ensure that\nthe output of the decoder given the mixed representation resembles the data distribution at the pixel level, we leverage adversarial learning BID4 , where here we train a discriminator to distinguish between decoded mixed and unmixed representations. This technique affords\na model the ability to simulate novel data points (such as those corresponding to combinations of annotations not present in the training set). Furthermore, we explore\nour approach in the context of semi-supervised learning, where we learn a mixing function whose objective is to produce interpolations of hidden states consistent with a conditioned class label.\nIn this paper, we proposed the adversarial mixup resynthesiser and showed that it can be used to produce realistic-looking combinations of examples by performing mixing in the bottleneck of an autoencoder.\nWe proposed several mixing functions, including one based on sampling from a uniform distribution and the other a Bernoulli distribution.\nFurthermore, we presented a semisupervised version of the Bernoulli variant in which one can leverage class labels to learn a mixing function which can determine what parts of the latent code should be mixed to produce an image consistent with a desired class label.\nWhile our technique can be used to leverage an autoencoder as a generative model, we conjecture that our technique may have positive effects on the latent representation and therefore downstream tasks, though this is yet to be substantiated.\nFuture work will involve more comparisons to existing literature and experiments to determine the effects of mixing on the latent space itself and downstream tasks.",
"role": "user"
},
{
"content": "We leverage deterministic autoencoders as generative models by proposing mixing functions which combine hidden states from pairs of images. These mixes are made to look realistic through an adversarial framework.",
"role": "assistant"
}
] |
science.covid_deepset_qa
|
science.covid_deepset_qa.1728
|
[
{
"content": "You will be shown a paragraph from a biomedical research paper, followed by a question about the paper. Please extract text from the paragraph to answer the question. Do not include any text in your repsonse other than the answer.\n\nParagraph: The\nmodelled deaths are informed by an infection-to-onset distribution ( time from infection to the onset\nof symptoms ), an onset-to-death distribution (time from the onset of symptoms to death), and the\npopulation-averaged infection fatality ratio (adjusted for the age structure and contact patterns of\neach country, see Appendix). Given these distributions and ratios, modelled deaths are a function of\nthe number of infections. The modelled number of infections is informed by the serial interval\ndistribution (the average time from infection of one person to the time at which they infect another)\nand the time-varying reproduction number. Finally, the time-varying reproduction number is a\nfunction of the initial reproduction number before interventions and the effect sizes from\ninterventions. Figure 5: Summary of model components. Following the hierarchy from bottom to top gives us a full framework to see how interventions affect\ninfections, which can result in deaths. We use Bayesian inference to ensure our modelled deaths can\nreproduce the observed deaths as closely as possible. From bottom to top in Figure 5, there is an\nimplicit lag in time that means the effect of very recent interventions manifest weakly in current\ndeaths (and get stronger as time progresses). To maximise the ability to observe intervention impact\non deaths, we fit our model jointly for all 11 European countries, which results in a large data set. Our\nmodel jointly estimates the effect sizes of interventions. We have evaluated the effect ofour Bayesian\nprior distribution choices and evaluate our Bayesian posterior calibration to ensure our results are\nstatistically robust (Appendix 8.4). 7 Acknowledgements\n\nInitial research on covariates in Appendix 8.6 was crowdsourced; we thank a number of people\nacross the world for help with this. This work was supported by Centre funding from the UK Medical\nResearch Council under a concordat with the UK Department for International Development, the\nNIHR Health Protection Research Unit in Modelling Methodology and CommunityJameel. 8 Appendix: Model Specifics, Validation and Sensitivity Analysis\n8.1 Death model\n\nWe observe daily deaths Dam for days t E 1, ...,n and countries m E 1, ...,p. These daily deaths are\nmodelled using a positive real-Valued function dam = E(Dam) that represents the expected number\nof deaths attributed to COVID-19. Dam is assumed to follow a negative binomial distribution with\n\n\nThe expected number of deaths (1 in a given country on a given day is a function of the number of\ninfections C occurring in previous days. At the beginning of the epidemic, the observed deaths in a country can be dominated by deaths that\nresult from infection that are not locally acquired. To avoid biasing our model by this, we only include\nobserved deaths from the day after a country has cumulatively observed 10 deaths in our model. To mechanistically link ourfunction for deaths to infected cases, we use a previously estimated COVID-\n19 infection-fatality-ratio ifr (probability of death given infection)9 together with a distribution oftimes\nfrom infection to death TE. The ifr is derived from estimates presented in Verity et al11 which assumed\nhomogeneous attack rates across age-groups. To better match estimates of attack rates by age\ngenerated using more detailed information on country and age-specific mixing patterns, we scale\nthese estimates (the unadjusted ifr, referred to here as ifr’) in the following way as in previous work.4\nLet Ca be the number of infections generated in age-group a, Na the underlying size of the population\nin that age group and AR“ 2 Ca/Na the age-group-specific attack rate. The adjusted ifr is then given\n\nby: ifra = fififié, where AR50_59 is the predicted attack-rate in the 50-59 year age-group after\n\nincorporating country-specific patterns of contact and mixing. This age-group was chosen as the\nreference as it had the lowest predicted level of underreporting in previous analyses of data from the\nChinese epidemic“. We obtained country-specific estimates of attack rate by age, AR“, for the 11\nEuropean countries in our analysis from a previous study which incorporates information on contact\nbetween individuals of different ages in countries across Europe.12 We then obtained overall ifr\nestimates for each country adjusting for both demography and age-specific attack rates. Using estimated epidemiological information from previous studies,“'11 we assume TE to be the sum of\ntwo independent random times: the incubation period (infection to onset of symptoms or infection-\nto-onset) distribution and the time between onset of symptoms and death (onset-to-death). The\ninfection-to-onset distribution is Gamma distributed with mean 5.1 days and coefficient of variation\n0.86. The onset-to-death distribution is also Gamma distributed with a mean of 18.8 days and a\ncoefficient of va riation 0.45. ifrm is population averaged over the age structure of a given country. The\ninfection-to-death distribution is therefore given by:\n\num ~ ifrm ~ (Gamma(5.1,0.86) + Gamma(18.8,0.45))\n\nFigure 6 shows the infection-to-death distribution and the resulting survival function that integrates\nto the infection fatality ratio. Figure 6: Left, infection-to-death distribution (mean 23.9 days). Right, survival probability of infected\nindividuals per day given the infection fatality ratio (1%) and the infection-to-death distribution on\nthe left. Using the probability of death distribution, the expected number of deaths dam, on a given day t, for\ncountry, m, is given by the following discrete sum:\n\n\nThe number of deaths today is the sum of the past infections weighted by their probability of death,\nwhere the probability of death depends on the number of days since infection. 8.2 Infection model\n\nThe true number of infected individuals, C, is modelled using a discrete renewal process. This approach\nhas been used in numerous previous studies13'16 and has a strong theoretical basis in stochastic\nindividual-based counting processes such as Hawkes process and the Bellman-Harris process.”18 The\nrenewal model is related to the Susceptible-Infected-Recovered model, except the renewal is not\nexpressed in differential form. To model the number ofinfections over time we need to specify a serial\ninterval distribution g with density g(T), (the time between when a person gets infected and when\nthey subsequently infect another other people), which we choose to be Gamma distributed:\n\ng ~ Gamma (6.50.62). The serial interval distribution is shown below in Figure 7 and is assumed to be the same for all\ncountries. Figure 7: Serial interval distribution g with a mean of 6.5 days. Given the serial interval distribution, the number of infections Eamon a given day t, and country, m,\nis given by the following discrete convolution function:\n\n_ t—1\nCam — Ram ZT=0 Cr,mgt—‘r r\nwhere, similarto the probability ofdeath function, the daily serial interval is discretized by\n\nfs+0.5\n\n1.5\ngs = T=s—0.Sg(T)dT fors = 2,3, and 91 = fT=Og(T)dT. Infections today depend on the number of infections in the previous days, weighted by the discretized\nserial interval distribution. This weighting is then scaled by the country-specific time-Varying\nreproduction number, Ram, that models the average number of secondary infections at a given time. The functional form for the time-Varying reproduction number was chosen to be as simple as possible\nto minimize the impact of strong prior assumptions: we use a piecewise constant function that scales\nRam from a baseline prior R0,m and is driven by known major non-pharmaceutical interventions\noccurring in different countries and times. We included 6 interventions, one of which is constructed\nfrom the other 5 interventions, which are timings of school and university closures (k=l), self—isolating\nif ill (k=2), banning of public events (k=3), any government intervention in place (k=4), implementing\na partial or complete lockdown (k=5) and encouraging social distancing and isolation (k=6). We denote\nthe indicator variable for intervention k E 1,2,3,4,5,6 by IkI’m, which is 1 if intervention k is in place\nin country m at time t and 0 otherwise. The covariate ”any government intervention” (k=4) indicates\nif any of the other 5 interventions are in effect,i.e.14’t’m equals 1 at time t if any of the interventions\nk E 1,2,3,4,5 are in effect in country m at time t and equals 0 otherwise. Covariate 4 has the\ninterpretation of indicating the onset of major government intervention. The effect of each\nintervention is assumed to be multiplicative. Ram is therefore a function ofthe intervention indicators\nIk’t’m in place at time t in country m:\n\nRam : R0,m eXp(— 212:1 O(Rheum)-\n\nThe exponential form was used to ensure positivity of the reproduction number, with R0,m\nconstrained to be positive as it appears outside the exponential. The impact of each intervention on\n\nRam is characterised by a set of parameters 0(1, ...,OL6, with independent prior distributions chosen\nto be\n\nock ~ Gamma(. 5,1). The impacts ock are shared between all m countries and therefore they are informed by all available\ndata. The prior distribution for R0 was chosen to be\n\nR0,m ~ Normal(2.4, IKI) with K ~ Normal(0,0.5),\nOnce again, K is the same among all countries to share information. We assume that seeding of new infections begins 30 days before the day after a country has\ncumulatively observed 10 deaths. From this date, we seed our model with 6 sequential days of\ninfections drawn from cl’m,...,66’m~EXponential(T), where T~Exponential(0.03). These seed\ninfections are inferred in our Bayesian posterior distribution. We estimated parameters jointly for all 11 countries in a single hierarchical model. Fitting was done\nin the probabilistic programming language Stan,19 using an adaptive Hamiltonian Monte Carlo (HMC)\nsampler. We ran 8 chains for 4000 iterations with 2000 iterations of warmup and a thinning factor 4\nto obtain 2000 posterior samples. Posterior convergence was assessed using the Rhat statistic and by\ndiagnosing divergent transitions of the HMC sampler. Prior-posterior calibrations were also performed\n(see below). 8.3 Validation\n\nWe validate accuracy of point estimates of our model using cross-Validation. In our cross-validation\nscheme, we leave out 3 days of known death data (non-cumulative) and fit our model. We forecast\nwhat the model predicts for these three days. We present the individual forecasts for each day, as\nwell as the average forecast for those three days. The cross-validation results are shown in the Figure\n8. Figure 8: Cross-Validation results for 3-day and 3-day aggregatedforecasts\n\nFigure 8 provides strong empirical justification for our model specification and mechanism. Our\naccurate forecast over a three-day time horizon suggests that our fitted estimates for Rt are\nappropriate and plausible. Along with from point estimates we all evaluate our posterior credible intervals using the Rhat\nstatistic. The Rhat statistic measures whether our Markov Chain Monte Carlo (MCMC) chains have\n\nconverged to the equilibrium distribution (the correct posterior distribution). Figure 9 shows the Rhat\nstatistics for all of our parameters\n\n\nFigure 9: Rhat statistics - values close to 1 indicate MCMC convergence. Figure 9 indicates that our MCMC have converged. In fitting we also ensured that the MCMC sampler\nexperienced no divergent transitions - suggesting non pathological posterior topologies. 8.4 SensitivityAnalysis\n\n8.4.1 Forecasting on log-linear scale to assess signal in the data\n\nAs we have highlighted throughout in this report, the lag between deaths and infections means that\nit ta kes time for information to propagate backwa rds from deaths to infections, and ultimately to Rt. A conclusion of this report is the prediction of a slowing of Rt in response to major interventions. To\ngain intuition that this is data driven and not simply a consequence of highly constrained model\nassumptions, we show death forecasts on a log-linear scale. On this scale a line which curves below a\nlinear trend is indicative of slowing in the growth of the epidemic. Figure 10 to Figure 12 show these\nforecasts for Italy, Spain and the UK. They show this slowing down in the daily number of deaths. Our\nmodel suggests that Italy, a country that has the highest death toll of COVID-19, will see a slowing in\nthe increase in daily deaths over the coming week compared to the early stages of the epidemic. We investigated the sensitivity of our estimates of starting and final Rt to our assumed serial interval\ndistribution. For this we considered several scenarios, in which we changed the serial interval\ndistribution mean, from a value of 6.5 days, to have values of 5, 6, 7 and 8 days. In Figure 13, we show our estimates of R0, the starting reproduction number before interventions, for\neach of these scenarios. The relative ordering of the Rt=0 in the countries is consistent in all settings. However, as expected, the scale of Rt=0 is considerably affected by this change — a longer serial\ninterval results in a higher estimated Rt=0. This is because to reach the currently observed size of the\nepidemics, a longer assumed serial interval is compensated by a higher estimated R0. Additionally, in Figure 14, we show our estimates of Rt at the most recent model time point, again for\neach ofthese scenarios. The serial interval mean can influence Rt substantially, however, the posterior\ncredible intervals of Rt are broadly overlapping. Figure 13: Initial reproduction number R0 for different serial interval (SI) distributions (means\nbetween 5 and 8 days). We use 6.5 days in our main analysis. Figure 14: Rt on 28 March 2020 estimated for all countries, with serial interval (SI) distribution means\nbetween 5 and 8 days. We use 6.5 days in our main analysis. 8.4.3 Uninformative prior sensitivity on or\n\nWe ran our model using implausible uninformative prior distributions on the intervention effects,\nallowing the effect of an intervention to increase or decrease Rt. To avoid collinearity, we ran 6\nseparate models, with effects summarized below (compare with the main analysis in Figure 4). In this\nseries of univariate analyses, we find (Figure 15) that all effects on their own serve to decrease Rt. This gives us confidence that our choice of prior distribution is not driving the effects we see in the\nmain analysis. Lockdown has a very large effect, most likely due to the fact that it occurs after other\ninterventions in our dataset. The relatively large effect sizes for the other interventions are most likely\ndue to the coincidence of the interventions in time, such that one intervention is a proxy for a few\nothers. Figure 15: Effects of different interventions when used as the only covariate in the model. 8.4.4\n\nTo assess prior assumptions on our piecewise constant functional form for Rt we test using a\nnonparametric function with a Gaussian process prior distribution. We fit a model with a Gaussian\nprocess prior distribution to data from Italy where there is the largest signal in death data.\n\nQuestion: What is an infection-to-onset-distribution?",
"role": "user"
},
{
"content": "time from infection to the onset\nof symptoms",
"role": "assistant"
}
] |
science.chia_ner
|
science.chia_ner.549
|
[
{
"content": "You will be shown inclusion or exclusion criteria for a clinical trial. Given these criteria, your task is to extract all unique entities of the following types: [\"Observation\", \"Condition\", \"Procedure\", \"Drug\", \"Temporal\", \"Multiplier\", \"Qualifier\", \"Negation\", \"ReferencePoint\", \"Scope\", \"Value\", \"Mood\", \"Device\", \"Visit\", \"Measurement\", \"Person\"].\n\nPlease return the output as a JSON object of the format: {\"Condition\" : [\"breast cancer\", ...], \"Device\" : [\"ports\", ...], \"Drug\" : [\"Atenolol\", ...], \"Measurement\" : [\"blood pressure\", ...], \"Mood\" : [\"suspected\", ...], \"Multiplier\" : [\"more than two\", ...], \"Negation\" : [\"no\", ...], \"Observation\" : [\"history\", ...], \"Person\" : [\"male\", ...], \"Procedure\" : [\"surgery\", ...], \"Qualifier\" : [\"severe\", ...], \"ReferencePoint\" : [\"enrollment\", ...], \"Scope\" : [\"infection\", ...], \"Temporal\" : [\"one day\", ...], \"Value\" : [\"50%\", ...], \"Visit\" : [\"ICU\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nPatients with heart transplantation\nPatient with coronary artery disease\nAge between 18 and 80 years",
"role": "user"
},
{
"content": "{\"Condition\": [\"coronary artery disease\"], \"Device\": [], \"Drug\": [], \"Measurement\": [], \"Mood\": [], \"Multiplier\": [], \"Negation\": [], \"Observation\": [], \"Person\": [\"Age\"], \"Procedure\": [\"heart transplantation\"], \"Qualifier\": [], \"ReferencePoint\": [], \"Scope\": [], \"Temporal\": [], \"Value\": [\"between 18 and 80 years\"], \"Visit\": []}",
"role": "assistant"
}
] |
science.mslr2022_cochrane_multidoc_summarization
|
science.mslr2022_cochrane_multidoc_summarization.2939
|
[
{
"content": "You will be provided with the titles and abstracts of a collection of medical RCT reports studying a clinical question -- for instance, the efficacy of an intervention on some medical outcome. Please write a few sentences synthesizing the findings of these reports, which could serve as a conclusion section for a systematic review article. Your response should not include any other text. The RCT reports are below.\n\n\n\nTitle 1:\nMicrodose intravaginal levonorgestrel contraception: a multicentre clinical trial. III. The relationship between pregnancy rate and body weight. World Health Organization. Task Force on Long-Acting Systemic Agents for Fertility Regulation.\nAbstract 1:\nThis paper investigates the relationship between pregnancy rates and body weight of 1005 women using a vaginal ring releasing 20 micrograms levonorgestrel per 24 hours. While the overall pregnancy rate at one year was 3.7%, it was found that women have an increasing risk of pregnancy with increasing body weight. For example, a woman of 40 kg has an estimated pregnancy rate of 1.7% in contrast to 9.8% for a woman of 80 kg weight.\n\n\nTitle 2:\nContraceptive efficacy and cycle control with the Ortho Evra/Evra transdermal system: the analysis of pooled data.\nAbstract 2:\nTo present efficacy and cycle control data pooled from three pivotal studies of the contraceptive patch (Ortho Evra/Evra).\n Three multicenter, open-label, contraceptive studies that included up to 13 treatment cycles.Setting: 183 centers.\n 3,319 women.Intervention(s): Three consecutive 7-day patches (21 days) with 1 patch-free week per cycle.\n Contraceptive efficacy and cycle control.\n Overall and method failure life-table estimates of contraceptive failure through 13 cycles were 0.8% (95% CI, 0.3%-1.3%) and 0.6% (95% CI, 0.2%-0.9%), respectively. Corresponding Pearl indices were 0.88 (95% CI, 0.44-1.33) and 0.7 (95% CI, 0.31-1.10). Contraceptive failure among women with a body weight < 90 kg (<198 lb) was low and uniformly distributed across the weight range. A subgroup of women with body weight > or = 90 kg (> or = 198 lb) may have increased risk of pregnancy. The incidence of breakthrough bleeding was low and decreased over time.\n In contraceptive patch users, the overall annual probability of pregnancy was 0.8% and the method failure probability was 0.6%. The efficacy of the patch was high and similar across age and racial groups. Among women < 90 kg (<198 lb), contraceptive failure was low and uniformly distributed across the range of body weights. In women > or = 90 kg (> or = 198 lb), contraceptive failures may be increased. Efficacy and cycle control have been shown to be comparable to an established oral contraceptive.\n\n\nTitle 3:\nContraceptive failure rates of etonogestrel subdermal implants in overweight and obese women.\nAbstract 3:\nTo estimate the contraceptive failure rates of the etonogestrel subdermal contraceptive implant in overweight and obese women and compare failure rates with women of normal weight and women using intrauterine devices (IUDs).\n The Contraceptive CHOICE Project is a large prospective cohort study designed to promote the use of long-acting reversible contraceptive methods to reduce unintended pregnancies in the St Louis region. Participants are provided reversible contraception of their choice at no cost. We collected baseline height and weight of each participant. During each survey, participants were asked about missed menses and possible pregnancies. Any participant who suspected a pregnancy was asked to come in for urine pregnancy testing. Analysis includes the first 8,445 participants enrolled in CHOICE of which 1,168 chose the implant and 4,200 chose the IUD. Student's t test, χ test, and Kaplan-Meier survival curves were used to perform statistical analyses to estimate failure rates in overweight and obese women using the implant and IUDs.\n Of the women choosing the implant, 28% were overweight and 35% were obese. Of the women who chose an IUD, 27% were overweight and 35% were obese. The 3-year cumulative failure rates for implant and IUD users were less than one per 100 women-years and did not vary by body mass index.\n We found no decrease in the effectiveness of the implant in overweight or obese women. The implant may be offered as a first-line contraceptive method to any woman seeking a reversible and reliable birth control method.\n\n\nTitle 4:\nLevonorgestrel concentrations during use of levonorgestrel rod (LNG ROD) implants.\nAbstract 4:\nIn a three-year randomized trial that included 398 women, blood samples were collected for the purpose of assaying levonorgestrel concentrations in women using a new two-rod contraceptive implant system or an earlier implant formulation, Norplant-2 implants. Sample collection was at 1, 3, 6, 9, and 12 months after placement and semiannually thereafter through three years. Resulting assays and analyses showed that levonorgestrel concentrations of each implant formulation decreased significantly with time after placement, with increasing body weight, and with ponderal index. In the third year, several measures indicated that concentrations of the contraceptive drug were higher in women using the LNG ROD implants than in users of the original formulation. No pregnancies occurred among women in either group in the three years. This study provides evidence that the minimum levonorgestrel concentration needed to protect against pregnancy is below 200 pg/ml, and possibly is below 175 pg/ml.\n\n\nTitle 5:\nAssociation between efficacy and body weight or body mass index for two low-dose oral contraceptives.\nAbstract 5:\nThis analysis investigated the association of oral contraceptive efficacy with body weight and body mass index (BMI) for hypothesis-generating purposes.\n Data were from a randomized, parallel-group trial of 180/215/250 mcg of norgestimate (NGM)/25 mcg of ethinyl estradiol (EE) (given to 1671 women) and 1 mg of norethindrone acetate (NETA)/20 mcg of EE (given to 1139 women). Pregnancies were evaluated across BMI deciles and by BMI and body weight dichotomies. A Pearl index was calculated for each treatment group. The relative risk (RR) of pregnancy was calculated with a Cox proportional hazards model.\n The Pearl index for women who received NGM/EE was 2.36 [95% confidence interval (CI)=1.33-3.40]; for those who received NETA/EE, the Pearl index was 3.29 (95% CI=1.81-4.77). Consistent, weak positive associations between weight and pregnancy risk were found. Overall, for women with a BMI >or=25 kg/m(2) (compared with women with a BMI <25 kg/m(2)), the RR of pregnancy was 1.84 (95% CI=0.98-3.45); that for women who received NGM/EE was 1.39 (95% CI=0.57-3.40), whereas that for women who received NETA/EE was 2.49 (95% CI=1.01-6.13). For women with a body weight >or=70 kg (compared with women with a body weight <70 kg), the RR was 1.25 (95% CI=0.63-2.46); that for women who received NGM/EE was 1.41 (95% CI=0.56-3.54), whereas that for women who received NETA/EE was 1.12 (95% CI=0.40-3.12).\n Women in the higher body weight or BMI category showed a small increase in the risk of pregnancy with these oral contraceptives, but this increase was not statistically significant overall or for either formulation studied.\n\n\nTitle 6:\nEffectiveness of Norplant implants through seven years: a large-scale study in China.\nAbstract 6:\nThe effectiveness of Norplant implants over a seven year period of continuous use was studied in a multicenter trial. Pregnancy rates were 0.4 per 100 in both year six and year seven. More than 3,600 women completed 6 years and more than 2,400 women completed 7 years. Pregnancy rates increased with weight (p < .05) and decreased with age, but in years 6 and 7 combined, the pregnancy rate neither reached nor exceeded 1 per 100 woman years in any 5 year age group or in any 10 kg weight group.\n\n\nTitle 7:\nPre-introductory clinical trials of Norplant implants: a comparison of seventeen countries' experience.\nAbstract 7:\nThis report summarizes the data collected in pre-introductory Norplant implants clinical trials in 17 countries in Latin America, Asia and Africa that were coordinated by either the Population Council or Family Health International (FHI) between 1984 and 1991. A total of 16,282 women between the ages of 18 and 40 years participated in the studies with semi-annual or annual follow-up visits for up to 5 years. Gross cumulative pregnancy rates were < 0.6 per 100 women in the first year and < 1.5 in the second year in all countries. Significant differences in cumulative 5-year pregnancy rates were observed between weight groups 40-49 and 50-59 kg and between 50-59 and 60-69 kg body weight but not between 60-69 kg and those > or = 70 kg. Total cumulative discontinuation rates after five years of Norplant implants use ranged from 35.8 to 60.0 per 100 women. Younger age and low parity were associated with a higher discontinuation rate. Cumulative discontinuation rates for menstrual reasons more than doubled between the end of the first year and second year of use in 13 of 17 countries. This analysis and one previous review provide the only comparison of Norplant contraceptive system study results across a wide diversity of countries; thus allowing an appreciation of the range of clinical experience with Norplant implants and the regional differences in that experience.\n\n\nTitle 8:\nContraceptive efficacy and safety of DMPA-SC.\nAbstract 8:\nDMPA-SC 104 mg/0.65 mL is a new, low-dose subcutaneous (SC) formulation of Depo-Provera contraceptive injection (150 mg/mL medroxyprogesterone acetate injectable suspension) that provides efficacy, safety and immediacy of onset equivalent to Depo-Provera intramuscular (IM) injection. Two large, open-label, Phase 3 studies assessed the 1-year contraceptive efficacy, safety and patient satisfaction with DMPA-SC administered every 3 months (12-13 weeks). Zero pregnancies were reported in both studies, which included a total of 16,023 woman-cycles of exposure to DMPA-SC and substantial numbers of overweight or obese women. DMPA-SC was well-tolerated and adverse events were similar to those reported previously with Depo-Provera IM. Thus, DMPA-SC offers women a new, highly effective and convenient long-acting contraceptive option.\n Copyright 2004 Elsevier Inc.\n\n\nTitle 9:\nBody weight does not impact pregnancy rates during use of a low-dose extended-regimen 91-day oral contraceptive.\nAbstract 9:\nThis study evaluated the impact of weight on efficacy during use of an extended oral contraceptive (OC).\n Data were from a Phase 3 clinical trial evaluating the efficacy of a low-dose 91-day extended regimen of 100 mcg levonorgestrel/20 mcg ethinyl estradiol (LNG/EE; 84 days)+10 mcg EE (7 days) for the prevention of pregnancy. Crude pregnancy rates were calculated for weight and body mass index (BMI) deciles.\n Of the 1736 women in this analysis, 878 (50.6%) had a BMI greater than 25 kg/m2, and 770 (44.4%) were heavier than 70 kg. Pregnancies occurred in 36 women. Crude pregnancy rates were similar across weight and BMI deciles, with no discernable differences observed between deciles using either classification criterion.\n No evidence of any reduction in the level of contraceptive efficacy was observed with this low-dose extended OC regimen in overweight and obese women.\n Copyright © 2012. Published by Elsevier Inc.",
"role": "user"
},
{
"content": "The evidence did not generally show an association of BMI with effectiveness of hormonal contraceptives. However, the evidence was limited for any individual contraceptive method. Studies using BMI (rather than weight alone) can provide more information about whether body composition is related to contraceptive effectiveness. The efficacy of subdermal implants and injectable contraceptives may be unaffected by body mass. The contraceptive methods examined here are among the most effective when the recommended regimen is followed.\nThe overall quality of evidence was low for this review. More recent reports provided moderate quality evidence, while the older studies provided evidence of low or very low quality for our purposes. Investigators should consider adjusting for potential confounding related to BMI. Trials should be designed to include sufficient numbers of overweight or obese women to adequately examine effectiveness and side effects of hormonal contraceptives within those groups.",
"role": "assistant"
}
] |
science.bioasq_yesno_qa
|
science.bioasq_yesno_qa.1062
|
[
{
"content": "You have to answer a biomedical question in binary format, i.e. only yes and no are the only acceptable answer formats. A list of paragraphs is provided as context to help you answer the question.\n\nQuestion: Does IL18 signaling have a role in thymus?\n\nContext: IL18 signaling promotes homing of mature Tregs into the thymus.\nCollectively, this study provides a detailed characterization of the mature Treg subsets in the mouse thymus and identifies a key role of IL18 signaling in controlling the CCR6-CCL20-dependent migration of Tregs into the thymus.\ner, we show that IL18R+ Tregs are endowed with higher capacity to populate the thymus than their IL18R- or IL18R-/- counterparts, highlighting the key role of IL18R in this process\nIL18 signaling promotes homing of mature Tregs into the thymus\ninally, we demonstrate that IL18 signaling is critical for the induction of the key thymus-homing chemokine receptor - CCR6 on Tregs.\nMoreover, we show that IL18R+ Tregs are endowed with higher capacity to populate the thymus than their IL18R- or IL18R-/- counterparts, highlighting the key role of IL18R in this process.",
"role": "user"
},
{
"content": "yes",
"role": "assistant"
}
] |
science.data_reco_mcq_mc
|
science.data_reco_mcq_mc.1055
|
[
{
"content": "You are provided with a research question, keyphrases about the question, a description of candidate datasets and dataset options. Read the description of popular datasets provided below and select the ones that can be used to validate the following research question. Use your knowledge of machine learning datasets to make the best judgement.\nYour response should be formatted as a `json` array. For instance, for the query \"Semi supervised image classification\", a sample response might be: [\"CIFAR-10\", \"CIFAR-100\"]. Do not include any extra text in the response other than the answer array.\n\nQuery: A novel end-to-end trainable deep learning model for unsupervised visual tracking.\n\nKeyphrases: unsupervised visual tracking images\n\nDataset description:\nWikiReading: WikiReading is a large-scale natural language understanding task and publicly-available dataset with 18 million instances. The task is to predict textual values from the structured knowledge base Wikidata by reading the text of the corresponding Wikipedia articles.\nDDD20: The dataset was captured with a DAVIS camera that concurrently streams both dynamic vision sensor (DVS) brightness change events and active pixel sensor (APS) intensity frames. DDD20 is the longest event camera end-to-end driving dataset to date with 51h of DAVIS event+frame camera and vehicle human control data collected from 4000km of highway and urban driving under a variety of lighting conditions.\nVOT2016: VOT2016 is a video dataset for visual object tracking. It contains 60 video clips and 21,646 corresponding ground truth maps with pixel-wise annotation of salient objects.\nE2E: End-to-End NLG Challenge (E2E) aims to assess whether recent end-to-end NLG systems can generate more complex output by learning from datasets containing higher lexical richness, syntactic complexity and diverse discourse phenomena.\nEyeCar: EyeCar is a dataset of driving videos of vehicles involved in rear-end collisions paired with eye fixation data captured from human subjects. It contains 21 front-view videos that were captured in various traffic, weather, and day light conditions.\nVOT2017: VOT2017 is a Visual Object Tracking dataset for different tasks that contains 60 short sequences annotated with 6 different attributes.\n\nOptions:- A: WikiReading B: DDD20 C: VOT2016 D: E2E E: EyeCar F: VOT2017",
"role": "user"
},
{
"content": "[\"VOT2016\", \"VOT2017\"]",
"role": "assistant"
}
] |
science.mslr2022_cochrane_multidoc_summarization
|
science.mslr2022_cochrane_multidoc_summarization.3283
|
[
{
"content": "You will be provided with the titles and abstracts of a collection of medical RCT reports studying a clinical question -- for instance, the efficacy of an intervention on some medical outcome. Please write a few sentences synthesizing the findings of these reports, which could serve as a conclusion section for a systematic review article. Your response should not include any other text. The RCT reports are below.\n\n\n\nTitle 1:\nSulpiride augmentation in people with schizophrenia partially responsive to clozapine. A double-blind, placebo-controlled study.\nAbstract 1:\nWe hypothesised that a combined regimen of clozapine, a relatively weak D2-dopaminergic antagonist, and sulpiride, a selective D2 blocker, would demonstrate a greater antipsychotic efficacy by enhancing the D2 blockade of clozapine.\n Twenty-eight people with schizophrenia, previously unresponsive to typical antipsychotics and only partially responsive to current treatment with clozapine, received, double-blind, 600 mg/day sulpiride or placebo, in addition to an ongoing clozapine treatment. The clinical status was evaluated before, during, and at the end of 10 weeks of sulpiride addition using the Brief Psychiatric Rating Scale (BPRS), Scale for the Assessment of Positive Symptoms (SAPS), Scale for the Assessment of Negative Symptoms, and Hamilton Rating Scale for Depression.\n The clozapine-sulpiride group exhibited substantially greater and significant improvements in positive and negative psychotic symptoms. About half of them, characterised by a younger age and lower baseline SAPS scores, had a mean reduction of 42.4 and 50.4% in their BPRS and SAPS scores, respectively.\n A subgroup of patients with chronic schizophrenia may substantially benefit from sulpiride addition to clozapine.",
"role": "user"
},
{
"content": "Sulpiride plus clozapine is probably more effective than clozapine alone in producing clinical improvement in some people whose illness has been resistant to other antipsychotic drugs including clozapine. However, much more robust data are needed.",
"role": "assistant"
}
] |
science.pubmedqa_qa
|
science.pubmedqa_qa.66
|
[
{
"content": "You will be shown an abstract from a research paper, and a corresponding research question. Determine the answer to the research question based on the context provided. Your answer should be one of: \"YES\", \"NO\", or \"MAYBE\".\n\nContext:\n(BACKGROUND) The range of injury severity that can be seen within the category of type II supracondylar humerus fractures (SCHFs) raises the question whether some could be treated nonoperatively. However, the clinical difficulty in using this approach lies in determining which type II SCHFs can be managed successfully without a surgical intervention.\n(METHODS) We reviewed clinical and radiographic information on 259 pediatric type II SCHFs that were enrolled in a prospective registry of elbow fractures. The characteristics of the patients who were treated without surgery were compared with those of patients who were treated surgically. Treatment outcomes, as assessed by the final clinical and radiographic alignment, range of motion of the elbow, and complications, were compared between the groups to define clinical and radiographic features that related to success or failure of nonoperative management.\n(RESULTS) During the course of treatment, 39 fractures were found to have unsatisfactory alignment with nonoperative management and were taken for surgery. Ultimately, 150 fractures (57.9%) were treated nonoperatively, and 109 fractures (42.1%) were treated surgically. At final follow-up, outcome measures of change in carrying angle, range of motion, and complications did not show clinically significant differences between treatment groups. Fractures without rotational deformity or coronal angulation and with a shaft-condylar angle of>15 degrees were more likely to be associated with successful nonsurgical treatment. A scoring system was developed using these features to stratify the severity of the injury. Patients with isolated extension deformity, but none of the other features, were more likely to complete successful nonoperative management.\n\nQuestion:\nType II supracondylar humerus fractures: can some be treated nonoperatively?\n\nAnswer:",
"role": "user"
},
{
"content": "YES",
"role": "assistant"
}
] |
science.bioasq_yesno_qa
|
science.bioasq_yesno_qa.909
|
[
{
"content": "You have to answer a biomedical question in binary format, i.e. only yes and no are the only acceptable answer formats. A list of paragraphs is provided as context to help you answer the question.\n\nQuestion: Is there a vaccine for peanut allergy?\n\nContext: Currently, two forms of peanut immunotherapy, oral immunotherapy (OIT) and epicutaneous immunotherapy (EPIT), are in Phase III clinical trials and have shown promise to induce desensitization in many subjects\nThis article presents an overview of potential treatments of food allergy, with an emphasis on various forms of immunotherapy (including oral immunotherapy, sublingual immunotherapy, epicutaneous immunotherapy, immunotherapy with modified food antigens, and immunotherapy with a recombinant peanut vaccine).\nRecent advances in immunotherapy and vaccine development for peanut allergy.\nEfforts have been made to develop a vaccine for peanut allergy.\nSo far, results, especially from oral immunotherapy studies, have shown good efficacy in achieving desensitization to peanut with a good safety profile.",
"role": "user"
},
{
"content": "yes",
"role": "assistant"
}
] |
science.anat_em_ner
|
science.anat_em_ner.548
|
[
{
"content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"AnatomicalEntity\"].\n\nPlease return the output as a JSON object of the format: {\"AnatomicalEntity\": [\"tumor cells\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nModulation of apoptosis in human myeloid leukemic cells by GM-CSF. \nApoptosis (programmed cell death) regulates cell population size. To determine the mechanisms whereby hematopoietic growth factors (HGFs) modulate apoptosis in human myeloid leukemic cells, we evaluated the roles of protein and mRNA synthesis for altering apoptosis in growth factor-stimulated vs. quiescent leukemic TF1 cells. Lysates of cells from the granulocyte-macrophage colony-stimulating factor (GM-CSF)-dependent myeloid leukemic cell line TF1 were separated into high molecular weight (HMW) pellets of intact DNA and supernatants of fragmented low MW (LMW) DNA, and the DNA purified from these fractions was quantified. In the absence of both GM-CSF and fetal bovine serum (FBS), 70% of the DNA was fragmented after 3 days in culture, with a characteristic apoptotic ladder-like pattern on agarose gel electrophoresis, whereas this proportion had initially been < 5%. In contrast, less than 5% of the DNA was fragmented in cells incubated with GM-CSF plus FBS or GM-CSF alone. Delayed addition of GM-CSF, but not FBS, permitted partial rescue of the cells, inhibiting increasing rates of accumulation of fragmented DNA. When the macro-molecular synthesis inhibitor cycloheximide (CHX) or actinomycin D (Act D) was present for 26 hours in the absence of GM-CSF and FBS, apoptosis was inhibited. In contrast, in the presence of GM-CSF or FBS, apoptosis was enhanced upon addition of CHX or Act D. The latter effect persisted even with the late addition of CHX. These findings indicate that disparate mechanisms of enhancing or inhibiting apoptosis exist in myeloid leukemic cells related to environmental conditions, including HGF-regulated cellular synthesis of distinct proteins and mRNA.",
"role": "user"
},
{
"content": "{\"AnatomicalEntity\": [\"myeloid leukemic cells\", \"supernatants\", \"Lysates\", \"myeloid leukemic cell line TF1\", \"FBS\", \"cell\", \"leukemic TF1 cells\", \"cells\", \"fetal bovine serum\", \"cellular\"]}",
"role": "assistant"
}
] |
science.multixscience_multidoc_summarization
|
science.multixscience_multidoc_summarization.28445
|
[
{
"content": "You are provided with the abstract of a scientific paper and the abstracts of the papers it references. Your task is to write a Related Work section for the main paper. This section should concisely summarize and integrate the key points from the cited abstracts, reflecting how they relate to the main paper's contributions and context. In your summary, use numerical citation markers like [1], [2], etc., to refer to the corresponding papers.\n\nMain Abstract:\nFor realistic scale-free networks, we investigate the traffic properties of stochastic routing inspired by a zero-range process known in statistical physics. By parameters @math and @math , this model controls degree-dependent hopping of packets and forwarding of packets with higher performance at more busy nodes. Through a theoretical analysis and numerical simulations, we derive the condition for the concentration of packets at a few hubs. In particular, we show that the optimal @math and @math are involved in the trade-off between a detour path for @math ; In the low-performance regime at a small @math , the wandering path for @math and @math is small, neither the wandering long path with short wait trapped at nodes ( @math ), nor the short hopping path with long wait trapped at hubs ( @math ) is advisable. A uniformly random walk ( @math ) yields slightly better performance. We also discuss the congestion phenomena in a more complicated situation with packet generation at each time step.\n\nCited Abstract(s):\n[1]: We numerically investigate jamming transitions in complex heterogeneous networks. Inspired by Internet routing protocols, we study a general model that incorporates local traffic information through a tunable parameter. The results show that whether the transition from a low-traffic regime to a congested phase is of first- or second-order type is determined by the protocol at work. The microscopic dynamics reveals that these two radically different behaviors are due to the way in which traffic jams propagate through the network. Our results are discussed in the context of Internet dynamics and other transport processes that take place on complex networks and provide insights for the design of routing policies based on traffic awareness in communication systems.\n\n[2]: We propose a routing strategy to improve the transportation efficiency on complex networks. Instead of using the routing strategy for shortest path, we give a generalized routing algorithm to find the so-called efficient path, which considers the possible congestion in the nodes along actual paths. Since the nodes with the largest degree are very susceptible to traffic congestion, an effective way to improve traffic and control congestion, as our strategy, can be redistributing traffic load in central nodes to other noncentral nodes. Simulation results indicate that the network capability in processing traffic is improved more than 10 times by optimizing the efficient path, which is in good agreement with the analysis. DOI: 10.1103 PhysRevE.73.046108 PACS numbers: 89.75.Hc Since the seminal work on scale-free networks by Barabasi and Albert BA model1 and on the small-world phenomenon by Watts and Strogatz 2, the structure and dynamics of complex networks have recently attracted a tremendous amount of interest and attention from the physics community see the review papers 3‐5 and references therein. The increasing importance of large communication networks such as the Internet 6, upon which our society survives, calls for the need for high efficiency in handling and delivering information. In this light, to find optimal strategies for traffic routing is one of the important issues we have to address. There have been many previous studies to understand and control traffic congestion on networks, with a basic assumption that the network has a homogeneous structure 7‐11. However, many real networks display both scale-free and small-world features, and thus it is of great interest to study the effect of network topology on traffic flow and the effect of traffic on network evolution. present a formalism that can cope simultaneously with the searching and traffic dynamics in parallel transportation systems 12. This formalism can be used to optimize network structure under a local search algorithm, while to obtain the formalism one should know the global information of the whole networks. Holme and Kim provide an in-depth analysis on the vertex edge overload cascading breakdowns based on evolving networks, and suggest a method to avoid\n\n[3]: networks. The strategy is governed by a single parameter. Simulation results show that maximizing the network capacity and reducing the packet travel time can generate an optimal parameter value. Compared with the strategy of adopting exclusive local static information, the new strategy shows its advantages in improving the efficiency of the system. The detailed analysis of the mixing strategy is provided for explaining its effects on traffic routing. The work indicates that effectively utilizing the larger degree nodes plays a key role in scalefree traffic systems.\n\n[4]: We define a minimal model of traffic flows in complex networks in order to study the trade-off between topological-based and traffic-based routing strategies. The resulting collective behavior is obtained analytically for an ensemble of uncorrelated networks and summarized in a rich phase diagram presenting second-order as well as first-order phase transitions between a free-flow phase and a congested phase. We find that traffic control improves global performance, enlarging the free-flow region in parameter space only in heterogeneous networks. Traffic control introduces nonlinear effects and, beyond a critical strength, may trigger the appearance of a congested phase in a discontinuous manner. The model also reproduces the crossover in the scaling of traffic fluctuations empirically observed on the Internet.",
"role": "user"
},
{
"content": "Other routing schemes [3] [1] [4] have also been considered, taking into account lengths of both the routing path and of the queue. In a deterministic model [1] , a forwarding node @math is chosen among neighbors @math by minimizing the quantity @math with a weight @math , @math denoting the distance from @math to the terminal node. Since we must solve the optimization problems, these models [2] [1] are not suitable for wireless or ad hoc communication networks. Thus, stochastic routing methods using only local information are potentially promising. In a stochastic model [4] , @math is chosen at random, and a packet at the top of its queue is sent with probability @math or refused with probability @math as a nondecreasing function of the queue length @math . This model is simplified by the assumption of a constant arrival rate of packets, for analyzing the critical point of traffic congestion in a mean-field equation [4] .",
"role": "assistant"
}
] |
science.mslr2022_cochrane_multidoc_summarization
|
science.mslr2022_cochrane_multidoc_summarization.1009
|
[
{
"content": "You will be provided with the titles and abstracts of a collection of medical RCT reports studying a clinical question -- for instance, the efficacy of an intervention on some medical outcome. Please write a few sentences synthesizing the findings of these reports, which could serve as a conclusion section for a systematic review article. Your response should not include any other text. The RCT reports are below.\n\n\n\nTitle 1:\nThe effect of relaxation therapy on preterm labor outcomes.\nAbstract 1:\nTo examine the effect of relaxation on preterm labor outcome.\n Quasi-experimental, with women who experienced preterm labor randomly assigned to a control or experimental group. The experimental group was to do a daily relaxation exercise. A third group was added to the study: women who were originally assigned to the relaxation group but were unable to adhere to the daily practice. Final data were analyzed for three groups: control (n = 40), experimental (n = 44), and nonadherent (n = 23) participants.\n Women were referred to the study from physician offices and a hospital-based obstetric triage clinic in the Northwest.\n Total sample was comprised of 107 women with singleton gestations, documented contractions with cervical change, and intact membranes.\n The experimental group was instructed in a progressive relaxation exercise. The participants were given tapes of the exercise and instructed to do it daily.\n Study outcomes included gestational age at birth, rate of pregnancy prolongation, and birth weight.\n The outcome variables were analyzed using analysis of covariance, with the preterm labor risk score entered as a covariate to compensate statistically for group differences. A positive response to the relaxation intervention was found: The experimental group had significantly longer gestations and larger newborns when compared to the control and nonadherent groups.\n Relaxation therapy made a difference in preterm labor outcome. Women who practiced relaxation had larger newborns, longer gestations, and higher rates of pregnancy prolongation. Given the low cost of the intervention, it should be offered to all women at risk for preterm labor.\n\n\nTitle 2:\nDoes relaxation education in anxious primigravid Iranian women influence adverse pregnancy outcomes?: a randomized controlled trial.\nAbstract 2:\nMaternal anxiety and stress are found to be predictors of adverse pregnancy outcomes, including low birth weight and prematurity.\n The aim of the study was to determine whether relaxation education in anxious pregnant Iranian women in their first pregnancy affects selected pregnancy outcomes, including birth weight, preterm birth, and surgical delivery rate.\n A total of 110 obstetrically and medically low-risk primigravid women in Iran with a high anxiety level demonstrated by Spielberger's State-Trait Anxiety Inventory were randomly assigned into experimental and control groups.\n In this randomized controlled trial, the experimental group received routine prenatal care along with 7-week applied relaxation training sessions, while the control group received only routine prenatal care. Anxiety and perceived stress were measured by pre-educational and post-educational intervention. Data related to pregnancy outcomes include birth weight, gestational age at birth, and type of delivery.\n Significant reductions in low birth weight, cesarean section, and/or instrumental extraction were found in the experimental group compared with the control group. No significant differences were found in the rate of preterm birth.\n The findings suggest beneficial effects of nurse-led relaxation education sessions during the prenatal period. This intervention could serve as a resource for improving pregnancy outcomes in women with high anxiety.\n\n\nTitle 3:\nEffect of integrated yoga on stress and heart rate variability in pregnant women.\nAbstract 3:\nTo study the effect of integrated yoga practice and guided yogic relaxation on both perceived stress and measured autonomic response in healthy pregnant women.\n The 122 healthy women recruited between the 18th and 20th week of pregnancy at prenatal clinics in Bangalore, India, were randomized to practicing yoga and deep relaxation or standard prenatal exercises 1-hour daily. The results for the 45 participants per group who completed the study were evaluated by repeated measures analysis of variance.\n Perceived stress decreased by 31.57% in the yoga group and increased by 6.60% in the control group (P=0.001). During a guided relaxation period in the yoga group, compared with values obtained before a practice session, the high-frequency band of the heart rate variability spectrum (parasympathetic) increased by 64% in the 20th week and by 150% in the 36th week, and both the low-frequency band (sympathetic), and the low-frequency to high-frequency ratio were concomitantly reduced (P<0.001 between the 2 groups). Moreover, the low-frequency band remained decreased after deep relaxation in the 36th week in the yoga group.\n Yoga reduces perceived stress and improves adaptive autonomic response to stress in healthy pregnant women.\n\n\nTitle 4:\nMassage therapy effects on depressed pregnant women.\nAbstract 4:\nEighty-four depressed pregnant women were recruited during the second trimester of pregnancy and randomly assigned to a massage therapy group, a progressive muscle relaxation group or a control group that received standard prenatal care alone. These groups were compared to each other and to a non-depressed group at the end of pregnancy. The massage therapy group participants received two 20 min therapy sessions by their significant others each week for 16 weeks of pregnancy, starting during the second trimester. The relaxation group provided themselves with progressive muscle relaxation sessions on the same time schedule. Immediately after the massage therapy sessions on the first and last days of the 16-week period the women reported lower levels of anxiety and depressed mood and less leg and back pain. By the end of the study the massage group had higher dopamine and serotonin levels and lower levels of cortisol and norepinephrine. These changes may have contributed to the reduced fetal activity and the better neonatal outcome for the massage group (i.e. lesser incidence of prematurity and low birthweight), as well as their better performance on the Brazelton Neonatal Behavior Assessment. The data suggest that depressed pregnant women and their offspring can benefit from massage therapy.\n\n\nTitle 5:\nMusic therapy to relieve anxiety in pregnant women on bedrest: a randomized, controlled trial.\nAbstract 5:\nTo explore the effect of music therapy on anxiety alleviation for antepartal women on bedrest in China.\n One hundred and twenty patients recruited from one tertiary hospital in Changsha city, China were enrolled in a randomized controlled trial. Women in the experimental group received music therapy for 30 minutes on 3 consecutive days. Usual care participants had a 30-minute rest on 3 consecutive days. Variables included anxiety (State-Trait Anxiety Inventory), and physiological responses (vital signs, fetal heart rate). Descriptive statistics, t tests, chi tests, Wilcoxon rank sum tests, and Pearson correlation analyses were used to analyze the data.\n Anxiety levels decreased and physiological responses improved significantly in the intervention group, which was provided with music therapy while on bedrest.\n Carefully selected music that incorporates a patient's own preferences may offer an inexpensive and effective method to reduce anxiety for antepartal women with high risk pregnancies who are on bedrest.\n\n\nTitle 6:\nPsychological factors in preterm labor and psychotherapeutic intervention.\nAbstract 6:\nnan\n\n\nTitle 7:\nThe effect of participating in the labor preparation classes on maternal vitality and positive affect during the pregnancy and after the labor.\nAbstract 7:\nPregnant women predispose to physical and emotional diseases. Vitality and positive affect are against the depression and low energetic mood Exercise and physical activities in pregnancy have short and long term consequences for mothers and their growing fetus and affected in mood regulation. Respiratory and relaxation skills could decrease stress, pain and use of analgesics during labor.\n In this study, 117 primigravida and multigravida women (59 women in case and 58 in control groups) were enrolled. Women in case group participated in delivery preparation classes since 20 weeks of pregnancy for 8 sessions. The control group was just received routine pregnancy care. The education was about pregnancy and delivery, physical exercises and relaxation skills. Questionnaires of vitality and positive affect toward the labor were completed three times: before intervention, after the 8(th) session and after delivery to 2 weeks later by an interview. Data analyzed using SPSS16 software and repeated measurement.\n There was a significant difference in vitality and positive affect regarding type of delivery before and after intervention and after delivery in case group(p < 0.0001).There was no significant difference in vitality and positive affect scores after delivery and after intervention (p < 0.083, p < 0.545). There was significant difference in vitality and positive affect scores regarding between case and control groups after the intervention and after the delivery (p < 0.001, p < 0.0001).\n Considering the results of this study, it could be recommended that all pregnant women should contribute in delivery preparation classes to improve their mood, confidence, vitality toward labor.",
"role": "user"
},
{
"content": "According to the results of this review, there is some evidence that relaxation during pregnancy reduces stress and anxiety. However, there was no effect on PTL/PTB. These results should be interpreted with caution as they were drawn from included studies with limited quality.",
"role": "assistant"
}
] |
science.scitldr_aic
|
science.scitldr_aic.1816
|
[
{
"content": "Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.\n\nPaper text:\n\nIn this work we present a new agent architecture, called Reactor, which combines multiple algorithmic and architectural contributions to produce an agent with higher sample-efficiency than Prioritized Dueling DQN (Wang et al., 2016) and Categorical DQN (Bellemare et al., 2017), while giving better run-time performance than A3C (Mnih et al., 2016).\nOur first contribution is a new policy evaluation algorithm called Distributional Retrace, which brings multi-step off-policy updates to the distributional reinforcement learning setting.\nThe same approach can be used to convert several classes of multi-step policy evaluation algorithms designed for expected value evaluation into distributional ones.\nNext, we introduce the β-leaveone-out policy gradient algorithm which improves the trade-off between variance and bias by using action values as a baseline.\nOur final algorithmic contribution is a new prioritized replay algorithm for sequences, which exploits the temporal locality of neighboring observations for more efficient replay prioritization.\nUsing the Atari 2600 benchmarks, we show that each of these innovations contribute to both the sample efficiency and final agent performance.\nFinally, we demonstrate that Reactor reaches state-of-the-art performance after 200 million frames and less than a day of training.\nModel-free deep reinforcement learning has achieved several remarkable successes in domains ranging from super-human-level control in video games (Mnih et al., 2015) and the game of Go BID10 , to continuous motor control tasks (Lillicrap et al., 2015; Schulman et al., 2015) .Much\nof the recent work can be divided into two categories. First\n, those of which that, often building on the DQN framework, act -greedily according to an action-value function and train using minibatches of transitions sampled from an experience replay buffer BID10 BID13 BID5 BID0 . These\nvalue-function agents benefit from improved sample complexity, but tend to suffer from long runtimes (e.g. DQN requires approximately a week to train on Atari). The second\ncategory are the actor-critic agents, which includes the asynchronous advantage actor-critic (A3C) algorithm, introduced by Mnih et al. (2016) . These agents\ntrain on transitions collected by multiple actors running, and often training, in parallel (Schulman et al., 2017; BID12 . The deep actor-critic\nagents train on each trajectory only once, and thus tend to have worse sample complexity. However, their distributed\nnature allows significantly faster training in terms of wall-clock time. Still, not all existing algorithms\ncan be put in the above two categories and various hybrid approaches do exist BID17 O'Donoghue et al., 2017; BID4 BID14 .\nIn this work we presented a new off-policy agent based on Retrace actor-critic architecture and show that it achieves similar performance as the current state-of-the-art while giving significant real-time performance gains.\nWe demonstrate the benefits of each of the suggested algorithmic improvements, including Distributional Retrace, beta-LOO policy gradient and contextual priority tree.\nDISPLAYFORM0 Proof.\nThe bias ofĜ β-LOO is DISPLAYFORM1",
"role": "user"
},
{
"content": "Reactor combines multiple algorithmic and architectural contributions to produce an agent with higher sample-efficiency than Prioritized Dueling DQN while giving better run-time performance than A3C.",
"role": "assistant"
}
] |
science.scitldr_aic
|
science.scitldr_aic.1241
|
[
{
"content": "Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.\n\nPaper text:\n\nRecent theoretical and experimental results suggest the possibility of using current and near-future quantum hardware in challenging sampling tasks.\nIn this paper, we introduce free-energy-based reinforcement learning (FERL) as an application of quantum hardware.\nWe propose a method for processing a quantum annealer’s measured qubit spin configurations in approximating the free energy of a quantum Boltzmann machine (QBM).\nWe then apply this method to perform reinforcement learning on the grid-world problem using the D-Wave 2000Q quantum annealer.\nThe experimental results show that our technique is a promising method for harnessing the power of quantum sampling in reinforcement learning tasks.\nReinforcement learning (RL) BID33 ; BID6 has been successfully applied in fields such as engineering BID11 ; BID35 , sociology BID12 ; BID30 , and economics BID22 ; BID31 .\nThe training samples in reinforcement learning are provided by the interaction of an agent with an ambient environment.\nFor example, in a motion planning problem in uncharted territory, it is desirable for the agent to learn to correctly navigate in the fastest way possible, making the fewest blind decisions.\nThat is, neither exploration nor exploitation can be pursued exclusively without either facing a penalty or failing at the task.\nOur goal is, therefore, not only to design an algorithm that eventually converges to an optimal policy, but for the algorithm to be able to generate suboptimal policies early in the learning process.\nFree-energy-based reinforcement learning (FERL) using a restricted Boltzmann machine (RBM), as suggested by BID27 , relies on approximating a utility function for the agent, called the Q-function, using the free energy of an RBM.\nRBMs have the advantage that their free energy can be efficiently calculated using closed formulae.\nRBMs can represent any joint distribution over binary variables BID20 ; BID15 ; Le BID19 ; however, this property of universality may require exponentially large RBMs BID20 ; Le BID19 .General\nBoltzmann machines (GBM) are proposed in an effort to devise universal Q-function approximators with polynomially large Boltzmann networks BID10 . Traditionally\n, Monte Carlo simulation is used to perform the computationally expensive tasks of approximating the free energy of GBMs under a Boltzmann distribution. One way to speed\nup the approximation process is to represent a GBM by an equivalent physical system and try to find its Boltzmann distribution. An example of such\na physical system is a quantum annealer consisting of a network of pair-wise interacting quantum bits (qubits). Although quantum annealers\nhave already been used in many areas of computational science, including combinatorial optimization and machine learning, their application in RL has not been explored.In order to use quantum annealing for RL, we first represent the Q-function as the free energy of a physical system, that is, that of a quantum annealer. We then slowly evolve the\nstate of the physical system from a well-known initial state toward a state with a Boltzmann-like probability distribution. Repeating the annealing process\nsufficiently long can provide us with samples from the Boltzmann distribution so that we can empirically approximate the free energy of the physical system under this distribution. Finally, approximating the free\nenergy of the system would give us an estimate of the Q-function. Up until the past few years, studies\nwere limited to the classical Boltzmann machines. 1 Recently, BID10 generalized the classical\nmethod toward a quantum or quantum-inspired algorithm for approximating the free energy of GBMs. Using simulated quantum annealing (SQA) BID10\nshowed that FERL using a deep Boltzmann machine (DBM) can provide a drastic improvement in the early stages of learning, yet performing the same procedure on an actual quantum device remained a difficult task. This is because sampling from a quantum system\nrepresenting a quantum Boltzmann machine is harder than the classical case, since at the end of each anneal the quantum system is in a superposition. Any attempt to measure the final state of the\nquantum system is doomed to fail since the superposition would collapse into a classical state that does not carry the entirety of information about the final state.In this work, we have two main contributions. We first employ a quantum annealer as a physical\ndevice to approximate the free energy of a classical Boltzmann machine. Second, we generalize the notion of classical Boltzmann\nmachines to quantum Boltzmann machines within the field of RL and utilize a quantum annealer to approximate the free energy of a quantum system. In order to deal with the issue of superposition mentioned\nabove, we propose a novel stacking procedure in that we attempt to reconstruct the full state of superposition from the partial information that we get from sampling after each anneal. Finally we report proof-of-concept results using the D-Wave\n2000Q quantum processor to provide experimental evidence for the applicability of a quantum annealer in reinforcement learning as predicted by BID10 .\nWe solve the grid-world problem using various Q-learning methods with the Q-function parametrized by different neural networks.\nFor comparison, we demonstrate the performance of a fully connected deep Q-network method that can be considered state of the art.\nThis method efficiently processes every training sample, but, as shown in Fig. 4 , requires a very large number of training samples to converge to the optimal policy.\nAnother conventional method is free-energy-based RL using an RBM.\nThis method is also very successful at learning the optimal policy at the scale of the RL task considered in our experiment.\nAlthough this method does not outperform other FERL methods that take advantage of a highly efficient sampling oracle, the processing of each training sample is efficient, as it is based on closed formulae.\nIn fact, for the size of problem considered, the RBM-based FERL outperforms the fully connected deep Q-network method.The comparison of results in Fig. 6 suggests that replica stacking is a successful method for estimating effective classical configurations obtained from a quantum annealer, given that the spins can only be measured in measurement bases.\nFor practical use in RL, this method provides a means of treating the quantum annealer as a QBM.\nFERL using the quantum annealer, in conjunction with the replica stacking technique, provides significant improvement over FERL using classical Boltzmann machines.\nThe curve representing SQA-based FERL using a Boltzmann machine on the Chimera graph is almost coincident with the one obtained using the D-Wave 2000Q, whereas the SQA-based FERL using a DBM slightly outperforms it.\nThis suggests that quantum annealing chips with greater connectivity and more control over annealing time can further improve the performance of the replica stacking method applied to RL tasks.\nThis is further supported by comparing the performance of SA-based FERL using a DBM versus SA-based FERL using the Chimera graph.\nThis result shows that DBM is, due to its additional connections, a better choice of neural network compared to the Chimera graph.For practical reasons, we aim to associate an identical choice of virtual parameters β and Γ to all of the TFIMs constructed using FERL.\nBID5 and BID25 provide methods for estimating the effective inverse temperature β for other applications.\nHowever, in both studies, the samples obtained from the quantum annealer are matched to the Boltzmann distribution of a classical Ising model.\nIn fact, the transverse-field strength is a second virtual parameter that we consider.\nThe optimal choice Γ \" 0.5 corresponds to 2{3 of the annealing time, in agreement with the work of BID0 , who also considers TFIM with 16 qubits.The agreement of FERL using quantum annealer reads treated as classical Boltzmann samples with that of FERL using SA and classical Boltzmann machines suggests that, at least for this task and this size of Boltzmann machine, the measurements provided by the D-Wave 2000Q can be considered good approximations of Boltzmann distribution samples of classical Ising models.The extended undirected graphical model developed in this paper using the replica stacking method is not limited to Q-function approximation in RL tasks.\nPotentially, this method can be applied to tasks where Boltzmann machines can be used.\nThis method provides a mechanism for approximating the activations and partition functions of quantum Boltzmann machines that have a significant transverse field.\nIn this paper, we describe a free-energy-based reinforcement learning algorithm using an existing quantum annealer, namely the D-Wave 2000Q.\nOur method relies on the Suzuki-Trotter decomposition and the use of the measured configurations by the D-Wave 2000Q as replicas of an effective classical Ising model of one dimension higher.\nThe results presented here are first-step proofs of concept of a proposed quantum algorithm with a promising path towards outperforming reinforcement learning algorithms devised for digital hardware.\nGiven appropriate advances in quantum annealing hardware, future research can employ the proposed principles to solve larger-scale reinforcement learning tasks in the emerging field of quantum machine learning.",
"role": "user"
},
{
"content": "We train Quantum Boltzmann Machines using a replica stacking method and a quantum annealer to perform a reinforcement learning task.",
"role": "assistant"
}
] |
science.covid_deepset_qa
|
science.covid_deepset_qa.745
|
[
{
"content": "You will be shown a paragraph from a biomedical research paper, followed by a question about the paper. Please extract text from the paragraph to answer the question. Do not include any text in your repsonse other than the answer.\n\nParagraph: Similar constraints apply to veterinary vaccines, with cost often an even more important consideration. Finally, long-lasting cellular and (where appropriate) humoral immune responses to the vectored antigen must be induced following administration of these vaccines, preferably with a single dose (Atkins et al., 2006) . As some of the vectors in use will have been seen by the host immune system prior to vaccination, whether the presence of pre-existing immune responses is detrimental for the further development of a vector-based vaccine scheme, or can augment responses to the vectored antigen, needs to be considered in detail. This is the subject of this review. In discussing the possible effects on pre-existing immunity, the natural immunity to the vector needs to be considered. Therefore, considering a vector such as Salmonella, if a host has previously been infected there will exist robust B and T memory responses, and as such, when a vaccination is delivered, an anamnestic response to the Salmonella antigens will be induced (while the response to the vectored antigen will be a primary response). This will theoretically reduce the exposure of the heterologous antigen to the immune system, as the vector is rapidly cleared. Surprisingly, as will be seen in some of the examples given below, this can have results that differ depending on the magnitude of the response to the vectored antigen. Similarly, for virally vectored antigens, the existence of pre-existing immunity to the vector (particularly neutralizing antibody) will restrict delivery of the virus into cells, thereby effectively reducing the dose of the vectored antigen. Again, this might be expected to result in a reduction in the antigenicity of the vectored antigen. In the case of bacterial vectors, the effect of pre-existing immune responses has only been tested using Salmonella serovars and Listeria spp. Concern that prior immunological experience of the host with either the homologous Salmonella vector strain or a related strain might compromise its ability to deliver heterologous vaccine antigen was first raised in 1987 (Dougan et al., 1987) . Bao and Clements subsequently reported experimental evidence of the consequences of prior exposure of animals to the vector strain (Bao & Clements, 1991) . This work showed that both serum and mucosal antibody responses against the foreign antigen were in fact upregulated in animals with prior exposure to the vector strain. Whittle & Verma (1997) reported similar findings. Mice immunized via the intra-peritoneal route with a Salmonella dublin aroA mutant expressing heterologous antigen after being exposed to the same vector showed a higher immune response to the vectored antigen in comparison to mice without any immunological memory against the vector. Subsequently, several studies have been conducted to examine the effect of pre-existing immunity in the host against Salmonella. These results are summarized in Table 1 . The various reports are contradictory in their findings and seem to paint a rather confusing picture. Some studies concluded that pre-existing immunity against the Salmonella vector leads to stronger immune responses against the delivered antigen (Bao & Clements, 1991; Jespersgaard et al., 2001; Kohler et al., 2000a, b; Metzger et al., 2004; Saxena et al., 2009; Sevil Domènech et al., 2008; Whittle & Verma, 1997) , with others considering pre-existing immunity to be a limiting factor in the long-term use of Salmonella as an efficient vector for antigen delivery (Attridge et al., 1997; Gahan et al., 2008; Roberts et al., 1999; Sevil Domènech et al., 2007; Vindurampulle & Attridge, 2003a, b) . A slight majority of the studies listed in Table 1 (10 versus eight) indicate the upregulation of immune responses after animals have been exposed to either homologous or related strains before the delivery of heterologous antigen using a Salmonella vector. A study by Metzger and co-workers on human volunteers using Salmonella Typhi as a vector suggested that there was no change in the T cell immune response against the heterologous antigen in human volunteers who were exposed to empty vector in comparison with volunteers who were immunologically naive of the vector strain (Metzger et al., 2004) . In these subjects, humoral responses were moderately elevated in preexposed individuals. Similarly, Saxena et al. (2009) indicated higher humoral and T cell responses in mice pre-exposed to homologous or heterologous Salmonella strains. The interleukin 4 (IL4) response was significantly higher when the animal host was exposed to the homologous strain, whereas pre-exposure to a related species did not have such an impact on IL4 responses. Conversely interferon (IFN)-c responses were higher, irrespective of the strain to which mice were pre-exposed. This study also indicated that the presence of homologous or heterologous opsonizing antibodies leads to a higher uptake of Salmonella by macrophages in vitro, which may explain the higher immune responses in exposed mice. As may be expected, uptake was higher when homologous sera were used as the opsonin rather than heterologous sera. This is depicted in Fig. 2 . Conversely, there are reports that indicate that pre-existing immunity against the bacterial vector downregulates immune responses against the delivered heterologous antigen using similar or related vectors. Attridge and coworkers reported that the presence of immunity against the bacterial vector prior to the delivery of vectored antigenic \n\nMicrobiology 159 protein can downregulate immune responses in mice against the delivered antigen (Attridge et al., 1997) . Similar results were reported by Roberts et al. (1999) and Vindurampulle & Attridge (2003a, b) . However, the latter authors found that the hypo-responsiveness could be largely eliminated by exposing animals to the foreign antigen prior to vectorpriming (Vindurampulle & Attridge, 2003b) . Unfortunately, this would appear to be impractical for an immunization regimen! A study presented by Gahan et al. (2008) immunized mice with S. Typhimurium expressing C fragment of tetanus toxin antigen from an expression plasmid or as a DNA vaccine. Vaccinated mice developed humoral responses to LPS and tetC (for the plasmid-bearing vaccines). Animals from all groups (including a previously unvaccinated group) were immunized on day 182 with Salmonella expressing tetC. At this time, the anti-LPS and tetC titres were beginning to wane. Fourteen days after the second immunization, the colonization of various mouse organs was assessed. The ability to colonize was found to be significantly reduced in groups that had been previously vaccinated with Salmonella. In view of this finding, it was perhaps not surprising that at day 210 the LPS titres were not significantly different between groups receiving one or two vaccinations. More interestingly, mice that had been primed with Salmonella alone, and then boosted with Salmonella expressing tetC, induced much lower anti-tetC responses than mice that had not been primed. This argues strongly that prior immunological immunity to the vector can seriously dampen subsequent antigen-specific humoral responses. Whether the same is true for cellular responses was not evaluated. Other studies have evaluated cellular responses. A study by Sevil Domènech and colleagues reported that pre-existing anti-vector immunity seriously compromises CD8 + responses in mice when exposed to a similar strain used as vector (Sevil Domènech et al., 2007) . In contrast, another study by the same authors reported that animals exposed to related vectors induce much higher CD8 + responses when compared with animals which do not have any pre-existing Salmonella immunity (Sevil Domènech et al., 2008) . The difference between these two studies was that in the first, the prime and boost were with identical serovars, while in the second study, different serovars were used. This may point to a way of avoiding downregulation of CD8 responses by pre-existing immunity. This is important, as one of the advantages of using Salmonella (an intracellular pathogen) is that strong cellular immune responses can be induced. It must be noted that in the case of Salmonella vaccines, effects other than strictly immunological responses (particularly adaptive responses) should be considered. In the context of innate immunity, it was shown that administration of non-virulent Salmonella to gnobiotic pigs eliminated disease following challenge with a virulent strain (Foster et al., 2003) . Interestingly, protection was not by competitive exclusion, as the virulent strain was in high numbers in the gut but did not distribute systemically. The protection was proposed to be mediated by the infiltration of a large number of polymorphonuclear leukocytes into the gut, and although perhaps impractical as a general prophylactic (as the time between vaccination and infection is short), this may be an option for short-term or perhaps therapeutic vaccination (as reviewed by Foster et al., 2012) . Chickens (Gallus gallus) are a natural animal reservoir for Salmonella, which makes them an important source of Salmonella-associated gastroenteritis in humans. The ability to use oral Salmonella vaccines to immunize against heterologous pathogens would be of enormous benefit to Uptake of STM-1 by J774 macrophages, relative to the highest uptake percentage. X, Opsonized with naive sera; m, opsonized with serum from mice exposed to Salmonella enteriditis; &, opsonized with serum from mice exposed to STM-1. Pre-existing immunity against vaccine vectors the poultry industry in both broiler and layer flocks. Both vertical and horizontal transmission is associated with Salmonella in chickens (Liljebjelke et al., 2005) . Vertical transmission via in ovo transmission is particularly important, because if there is prior exposure to the vaccine strain, subsequent vaccination using an oral Salmonella vector could be severely compromised. A considerable number of studies on cross-protective immunity and competitive exclusion have been undertaken in chickens. Protective cross-reactive immunity against Salmonella strains has been demonstrated against both homologous and heterologous challenges (Beal et al., 2006) , although cross-serogroup protection was not strong. Furthermore, a recent study reported that pretreatment of newly hatched chickens with different Salmonella strains could produce a complete invasioninhibition effect on any subsequent exposure to both homologous and heterologous strains (Methner et al., 2010) . Pre-exposure with a highly invasive form of Salmonella Enteritidis caused a large influx of heterophils to the caecal mucosa in 1-day-old chicks, and subsequent heterologous caecal colonization was inhibited for a period of 48 h (Methner et al., 2010) . The implications of this kind of colonization-inhibition study on the immunological status of the affected chickens are yet to be fully elucidated. It should be noted that the studies listed in Tables 1 and 2 are controlled laboratory studies, with the possibility of a competitive exclusion component to immunity not discussed. Similarly studies of L. monocytogenes and the effects of preexisting immune responses indicate conflicting results. A study by Bouwer et al. (1999) indicates that pre-existing immune responses against the Listeria vector do not diminish immune responses against the delivered heterologous antigen, and a similar study by Starks et al. (2004) also concluded that prior exposure of mice to the empty Listeria vector did not influence anti-cancer immune responses when a similar mutant was used as a carrier of a melanoma cancer antigen. Similar findings were reported by Whitney et al. (2011) in rhesus macaques in which L. monocytyogens was used as a carrier of gag-HIV antigen. Conversely, studies by Stevens et al. (2005) in which L. monocytogens was used to deliver feline immunodeficiency virus (FIV) gag protein and as a carrier of DNA vaccines to vaccinate cats against FIV envelope protein indicated lower immune responses against the delivered antigen in cats exposed to empty Listeria vector in comparison with naive animals (Stevens et al., 2005) . Similar findings have been reported by Tvinnereim et al. (2002) and Leong et al. (2009) . However, taken together, these studies conclude that prior exposure of host animals to empty vector does not abrogate immune responses to the vectored antigen, but only reduces them somewhat. Only the study by Vijh et al. (1999) indicated that exposure to the empty vector may completely abrogate immune responses against the delivered antigens (Vijh et al., 1999) . However, these studies also indicate that downregulation of antigenspecific immune responses is highly dependent on dose and time. Leong et al. (2009) also demonstrated that the negative impact of vector-specific immune responses can also be countered by repeated immunization with the same vaccine and dose; this in effect leads to higher priming of naive T cells against the delivered antigen. Of course, such repeated vaccination may not be practicable in real-world situations. Despite the many advantages which viral vectoring can offer, pre-existing immunity is a major obstacle of many viralvectored vaccines, such as Ad serotype 5 or herpes simplex virus type 1 (HSV-1), where the rate of seroprevalence to these viruses is very high [40-45 % and 70 % (or more) of the US population, respectively] (Hocknell et al., 2002; Pichla-Gollon et al., 2009) . Vector-specific antibodies may impede the induction of immune responses to the vaccine-encoded antigens, as they may reduce the dose and time of exposure of the target cells to the vaccinated antigens (Pichla-Gollon et al., 2009; Pine et al., 2011) . In a large-scale clinical trial (STEP) of an Ad serotype 5 (AdHu5)-based HIV-1 vaccine, the vaccines showed a lack of efficacy and tended to increase the risk of HIV-1 infection in vaccine recipients who had pre-existing neutralizing antibodies to AdHu5 (Buchbinder et al., 2008) . For an HSV-1-based vector vaccine, it has been demonstrated that pre-existing anti-HSV-1 immunity reduced, but did not abolish, humoral and cellular immune responses against the vaccine-encoded antigen (Hocknell et al., 2002; Lauterbach et al., 2005) . However, Brockman and Knipe found that the induction of durable antibody responses and cellular proliferative responses to HSVencoded antigen were not affected by prior HSV immunity (Brockman & Knipe, 2002) . Similarly, pre-existing immunity to poliovirus has little effect on vaccine efficacy in a poliovirus-vectored vaccine (Mandl et al., 2001) .\n\nQuestion: What is the effect of host immune response to viral delivery vectors in the efficacy of vaccination?",
"role": "user"
},
{
"content": "pre-existing immunity is a major obstacle of many viralvectored vaccines, such as Ad serotype 5 or herpes simplex virus type 1 (HSV-1), where the rate of seroprevalence to these viruses is very high [40-45 % and 70 % (or more) of the US population, respectively]",
"role": "assistant"
}
] |
science.data_reco_mcq_sc
|
science.data_reco_mcq_sc.7289
|
[
{
"content": "Given a short query with keyphrases about a research idea in 2020 in the domain of artificial intelligence, recommend the relevant datasets to validate the research idea from options provided. A short description of the datasets is also provided to help you.\n\nQuery: I am going to study video understanding by discovering semantic flows of the video.\n\nKeyphrases: video understanding\n\nDataset description:\nImage and Video Advertisements: The Image and Video Advertisements collection consists of an image dataset of 64,832 image ads, and a video dataset of 3,477 ads. The data contains rich annotations encompassing the topic and sentiment of the ads, questions and answers describing what actions the viewer is prompted to take and the reasoning that the ad presents to persuade the viewer (\"What should I do according to this ad, and why should I do it?\nHVU: HVU is organized hierarchically in a semantic taxonomy that focuses on multi-label and multi-task video understanding as a comprehensive problem that encompasses the recognition of multiple semantic aspects in the dynamic scene. HVU contains approx.~572k videos in total with 9 million annotations for training, validation, and test set spanning over 3142 labels.\nSemArt: SemArt is a multi-modal dataset for semantic art understanding. SemArt is a collection of fine-art painting images in which each image is associated to a number of attributes and a textual artistic comment, such as those that appear in art catalogues or museum collections.\nTVQA: The TVQA dataset is a large-scale vido dataset for video question answering. It is based on 6 popular TV shows (Friends, The Big Bang Theory, How I Met Your Mother, House M.D., Grey's Anatomy, Castle).\n\nOptions:- A: Image and Video Advertisements B: HVU C: SemArt D: TVQA",
"role": "user"
},
{
"content": "TVQA",
"role": "assistant"
}
] |
science.qasa_abstractive_qa
|
science.qasa_abstractive_qa.461
|
[
{
"content": "You will be shown an excerpt from a computer science scientific research paper, followed by a question about the paper. Please answer the question. Do not include any text in your response other than the answer.\n\nContext: Furthermore, we compare computational costs during inference by measuring FLOP count and peak memory usage. For SBM-Transformer, we test the model trained under \\lambda=10^{-1}. Due to lack of support for sparse tensor operations in existing FLOP-counters, we measure FLOP counts by manually enumerating through each tensor operation. Table 3 shows that SBM-Transformer is comparably efficient across all tasks except for Text, where SBM-Transformer showed the largest average mask density. Note that while the cost of other baselines are fixed after initialization, the cost of SBM-Transformer is data-adaptive and can vary input-by-input. Further analysis and qualitative examples demonstrating the input-dependent attention mask densities can be found in Appendix C. Nonetheless, there are limitations due to sparse tensor operations being less optimized on GPU kernels. In the LRA experiments, we found that SBM-Transformer can result in longer runtimes compared to dense counterparts while its memory usage is much lower. While previous sparsity-based attention mechanisms with block-sparse attention are much more amenable for GPU computation [51, 9, 3], our work requires an architecture with better workload balancing and acceleration under unstructured sparsity, for which there is ongoing work [46, 54].\n\nQuestion: The forward step of SBM-Transformer requires additional parameters and computation compared to the original Transformer architecture due to SBM sampling. Is this additional cost outweighed by exploiting sparsity?",
"role": "user"
},
{
"content": "SBM-Transformer is efficient compared to existing baselines in terms of FLOP count and peak memory use, but can result in longer runtimes due to sparse tensor operations being less optimized on GPU kernels.",
"role": "assistant"
}
] |
science.nlmgene_ner
|
science.nlmgene_ner.189
|
[
{
"content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Gene\"].\n\nPlease return the output as a JSON object of the format: {\"Gene\": [\"IGF\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nPrdm1 Regulates Thymic Epithelial Function To Prevent Autoimmunity. Autoimmunity is largely prevented by medullary thymic epithelial cells (TECs) through their expression and presentation of tissue-specific Ags to developing thymocytes, resulting in deletion of self-reactive T cells and supporting regulatory T cell development. The transcription factor Prdm1 has been implicated in autoimmune diseases in humans through genome-wide association studies and in mice using cell type-specific deletion of Prdm1 in T and dendritic cells. In this article, we demonstrate that Prdm1 functions in TECs to prevent autoimmunity in mice. Prdm1 is expressed by a subset of mouse TECs, and conditional deletion of Prdm1 in either Keratin 14- or Foxn1-expressing cells in mice resulted in multisymptom autoimmune pathology. Notably, the development of Foxp3(+) regulatory T cells occurs normally in the absence of Blimp1. Importantly, nude mice developed anti-nuclear Abs when transplanted with Prdm1 null TECs, but not wild-type TECs, indicating that Prdm1 functions in TECs to regulate autoantibody production. We show that Prdm1 acts independently of Aire, a crucial transcription factor implicated in medullary TEC function. Collectively, our data highlight a previously unrecognized role for Prdm1 in regulating thymic epithelial function.",
"role": "user"
},
{
"content": "{\"Gene\": [\"Aire\", \"Blimp1\", \"Foxp3\", \"Keratin 14\", \"Prdm1\", \"transcription factor\", \"Foxn1\"]}",
"role": "assistant"
}
] |
science.bioasq_factoid_qa
|
science.bioasq_factoid_qa.1299
|
[
{
"content": "You will be shown biomedical passages and your task is to the answer the provided question. Provide a concise and accurate response and do not include any additional text in the response other than the answer.\n\nQuestion: Which mutation is targeted by Tepotinib?\n\nContext: BACKGROUND: A splice-site mutation that results in a loss of transcription of exon 14 in the oncogenic driver MET occurs in 3 to 4% of patients with non-small-cell lung cancer (NSCLC). We evaluated the efficacy and safety of tepotinib, a highly selective MET inhibitor, in this patient population.\nCONCLUSIONS: Among patients with advanced NSCLC with a confirmed MET exon 14 skipping mutation, the use of tepotinib was associated with a partial response in approximately half the patients.\nManagement of Non-small Cell Lung Cancer Patients with MET Exon 14 Skipping Mutations.\nIf type I MET inhibitors (crizotinib, capmatinib, tepotinib, savolitinib) drug resistance is developed, type II MET inhibitors (cabozantinib, glesatinib, merestinib) can be considered.\nDrugs targeting MET 14 exon skipping mutation bring new hope to patients. MET inhibitors that are currently on the market or are about to be marketed include: crizotinib, cabozantinib, savolitinib and tepotinib.\nAt least five MET-targeted TKIs, including crizotinib, cabozantinib, capmatinib, tepotinib, and glesatinib, are being investigated clinically for patients with MET exon 14 altered-NSCLC.\nTepotinib hydrochloride is an orally bioavailable, mesenchymal-epithelial transition (MET) TKI developed mainly for selected NSCLC patients with METex14 skipping mutations.\nhibitors for MET exon 14 skipping mutation such as capmatinib and tepotinib have elucidated their effectiveness. Only a few reports have suggested their efficacy ag\nTepotinib is an oral MET inhibitor approved for metastatic non-small cell lung cancer (NSCLC) harboring MET exon 14 (METex14) skipping mutations.\nET exon 14 skipping mutation and poor performance status salvaged by marked leptomeningeal metastases response to tepotinib. We further provide measures of plasma/cerebrospinal fluid concentrations of t\nTepotinib is a key drug for cancer patients with mesenchymal-epithelial transition receptor tyrosine kinase proto-oncogene (MET) exon 14 skipping mutation.\nHerein, we report a case of a patient with metastatic lung adenocarcinoma harboring a MET exon 14 splice site mutation who has had prolonged disease control by a second-generation MET-TKI tepotinib. A 66-yr-old man was diagn\nRecently, in METex14-mutated NSCLC, multitarget tyrosine kinase inhibitors (TKIs), such as crizotinib and cabozantinib, as well as MET-selective TKIs, such as tepotinib and capmatinib, have demonstrated durable responses.\nTepotinib is an oral MET inhibitor approved for metastatic non-small cell lung cancer (NSCLC) harboring MET exon 14 (METex14) skipping mutations. Ex\n(NSCLC). The MET inhibitor tepotinib has demonstrated clinical efficacy in patients with MET exon 14 skipp\nification (~14%). In 2020, the approval of two MET-tyrosine kinase inhibitors (TKIs), capmatinib and tepotinib, for NSCLCs carrying MET∆ex14 dawned a new era for ME\nRecently approved and highly specific small-molecule inhibitors of c-MET exon 14 skipping mutations (e.g., capmatinib, tepotinib) are a new and important therapeutic option for the treatment of non-small cell lung cancer (NSCLC) patients harbouring c-MET alterations. Seve\n, in METex14-mutated NSCLC, multitarget tyrosine kinase inhibitors (TKIs), such as crizotinib and cabozantinib, as well as MET-selective TKIs, such as tepotinib and capmatinib, have demonstrated durable responses. In this study\ntudies reported that NSCLC patients harboring a METex14del responded well to MET-tyrosine kinase inhibitors (TKIs), including tepotinib. Tepotinib\nrgeted TKIs, including crizotinib, cabozantinib, capmatinib, tepotinib, and glesatinib, are being investigated clinically for patients with MET exon 14 altered-NSCLC. A further two compounds\nTepotinib is a key drug for cancer patients with mesenchymal-epithelial transition receptor tyrosine kinase proto-oncogene (MET) exon 14 skipping mutation.\n. Several tyrosine kinase inhibitors for MET exon 14 skipping mutation such as capmatinib and tepotinib have elucidated their effect\nTepotinib at this dose has obtained regulatory approval for the treatment of patients with non-small cell lung cancer harboring MET exon 14 skipping.\nActivity and bioavailability of tepotinib for leptomeningeal metastasis of NSCLC with MET exon 14 skipping mutation.\nExamining treatment-naive or tepotinib-resistant cells with MET amplification or METex14 skipping mutations identifies other receptor tyrosine kinases (RTKs) that co-exist in cells prior to tepotinib exposure and become more prominent upon tepotinib resistance.",
"role": "user"
},
{
"content": "MET Exon 14 Skipping Mutation",
"role": "assistant"
}
] |
science.bioasq_general_qa
|
science.bioasq_general_qa.634
|
[
{
"content": "You will be shown a question, followed by excerpts from biomedical research papers. Please answer the question based on the provided context. Do not include any text in your response other than the answer.\n\nQuestion: What is the link between lithium use during pregnancy and Ebstein anomaly?\n\nContext: There is a possibility that, in particular, lithium may be associated with the Ebstein anomaly but present evidence cannot definitely affirm or deny this association.\nA profound ultrasound screening of the organs should be performed in case of lithium exposition, whereas newer studies showed no increase in risk for Ebstein anomaly.\nDATA EXTRACTION AND SYNTHESIS: In the 1970s a very strong association was suggested between maternal lithium treatment during pregnancy and Ebstein's anomaly of the heart in the offspring.\nNo women who took lithium during pregnancy were found among four case-control studies of Ebstein's anomaly involving 25, 34, 59, and 89 affected children, respectively. In two cohort studies, risk ratios of 3.0 (95% confidence interval [CI], 1.2 to 7.7) and 1.5 (95% CI, 0.4 to 6.8) for all congenital anomalies have been observed.\n[Ebstein's anomaly of the tricuspid valve following prenatal exposure to lithium].\nAn Ebstein's anomaly was diagnosed, with tricuspid valve insufficiency. Exposure to lithium in utero was established. Literature reports indicate an association between exposure to lithium in utero and cardiac anomalies, notably Ebstein's anomaly.\nLithium is widely used in the treatment of bipolar affective disorders, and teratogenic effects include cardiovascular abnormalities, notably Ebstein anomaly.\nSome studies suggest that lithium might be involved as a teratogen increasing the incidence of Ebstein's anomaly in the offspring of female patients with manio-depressive psychosis and lithium-administered during pregnancy.\nCross-sectional echocardiography was used to evaluate two neonates whose mothers ingested lithium during pregnancy. In one infant, Ebstein's anomaly of the tricuspid valve was identified. In the other infant cross-sectional echocardiography provided reassurance that the infant did not have Ebstein's anomaly.\nIt is thought that this malformation is due to a direct teratogenic effect of lithium on the atrioventricular junction.\nWhen lithium hurts: a look at Ebstein anomaly.\nSome studies suggest that lithium might be involved as a teratogen increasing the incidence of Ebstein's anomaly in the offspring of female patients with manio-depressive psychosis and lithium-administered during pregnancy.\nThe search terms were lithium, pregnancy, teratogen, abnormalities (drug induced), Ebstein's anomaly, and adverse effects.<br><b>DATA EXTRACTION AND SYNTHESIS</b>: In the 1970s a very strong association was suggested between maternal lithium treatment during pregnancy and Ebstein's anomaly of the heart in the offspring.\nIn-utero exposure to lithium during the first trimester of pregnancy might be associated with an increased risk of cardiac malformations, especially the rare Ebstein's anomaly.\nDATA EXTRACTION AND SYNTHESIS In the 1970s a very strong association was suggested between maternal lithium treatment during pregnancy and Ebstein's anomaly of the heart in the offspring.\nBACKGROUND There has been concern that exposure to lithium early in pregnancy may be associated with a marked increase in the risk of Ebstein's anomaly (a right ventricular outflow tract obstruction defect) in infants and overall congenital cardiac defects, but data are conflicting and limited.\nCONCLUSIONS Maternal use of lithium during the first trimester was associated with an increased risk of cardiac malformations, including Ebstein's anomaly; the magnitude of this effect was smaller than had been previously postulated.\nOur data suggest that Ebstein's anomaly is associated with maternal mental health problems generally rather than lithium or benzodiazepines specifically; therefore, changing or stopping medications may not be preventative.\nA profound ultrasound screening of the organs should be performed in case of lithium exposition, whereas newer studies showed no increase in risk for Ebstein anomaly.\nLithium exposure during the first trimester seems to increase the risk of congenital heart disease, especially Ebstein's anomaly.\nReported neonatal problems with maternal lithium therapy include Ebstein's anomaly, poor respiratory effort and cyanosis, rhythm disturbances, nephrogenic diabetes insipidus, thyroid dysfunction, hypoglycemia, hypotonia and lethargy, hyperbilirubinemia, and large-for-gestational-age infants.\nAll case control studies regarding Ebstein's anomaly were negative, and among 222 infants with Ebstein's anomaly and 44 with tricuspid atresia none of the mothers had taken lithium during pregnancy.\n1 patient in the lithium group chose to terminate pregnancy after Ebstein's anomaly was detected by a prenatal echocardiogram.\nWhen lithium hurts: a look at Ebstein anomaly.\nLiterature reports indicate an association between exposure to lithium in utero and cardiac anomalies, notably Ebstein's anomaly.\nIn the 1970s a very strong association was suggested between maternal lithium treatment during pregnancy and Ebstein's anomaly of the heart in the offspring.\nMaternal lithium and neonatal Ebstein's anomaly: evaluation with cross-sectional echocardiography.\nNo women who took lithium during pregnancy were found among four case-control studies of Ebstein's anomaly involving 25, 34, 59, and 89 affected children, respectively.",
"role": "user"
},
{
"content": "It is generally believed that lithium use is associated with increased risk of Ebstein anomaly. However, more recent studies challenge this association.",
"role": "assistant"
}
] |
science.qasa_abstractive_qa
|
science.qasa_abstractive_qa.1427
|
[
{
"content": "You will be shown an excerpt from a computer science scientific research paper, followed by a question about the paper. Please answer the question. Do not include any text in your response other than the answer.\n\nContext: If we allow group convolution to obtain input data from different groups (as shown in Fig 1 (b)), the input and output channels will be fully related. Specifically, for the feature map generated from the previous group layer, we can first divide the channels in each group into several subgroups, then feed each group in the next layer with different subgroups. This can be efficiently and elegantly implemented by a channel shuffle operation (Fig 1 (c)): suppose a convolutional layer with g groups whose output has g\\times n channels; we first reshape the output channel dimension into (g,n), transposing and then flattening it back as the input of next layer. Note that the operation still takes effect even if the two convolutions have different numbers of groups. Moreover, channel shuffle is also differentiable, which means it can be embedded into network structures for end-to-end training. Channel shuffle operation makes it possible to build more powerful structures with multiple group convolutional layers. In the next subsection we will introduce an efficient network unit with channel shuffle and group convolution. To address the issue, a straightforward solution is to apply channel sparse connections, for example group convolutions, also on 1\\times 1 layers. By ensuring that each convolution operates only on the corresponding input channel group, group convolution significantly reduces computation cost. However, if multiple group convolutions stack together, there is one side effect: outputs from a certain channel are only derived from a small fraction of input channels. Fig 1 (a) illustrates a situation of two stacked group convolution layers. It is clear that outputs from a certain group only relate to the inputs within the group. This property blocks information flow between channel groups and weakens representation.\n\nQuestion: Each convolution operates on that corresponding input channel group. If so, how the model learns the features from entire input space?",
"role": "user"
},
{
"content": "When multiple group convolutions are stacked together, the authors use channel shuffle, which divides the channels into subgroups within groups and shuffles them in a way that each group consists of subgroups from all other groups. It lets the model learn from an entire input space despite the group convolution. However, the paper does not explicitly report such side effects when group convolutions are not stacked together.",
"role": "assistant"
}
] |
science.data_reco_mcq_mc
|
science.data_reco_mcq_mc.1072
|
[
{
"content": "You are provided with a research question, keyphrases about the question, a description of candidate datasets and dataset options. Read the description of popular datasets provided below and select the ones that can be used to validate the following research question. Use your knowledge of machine learning datasets to make the best judgement.\nYour response should be formatted as a `json` array. For instance, for the query \"Semi supervised image classification\", a sample response might be: [\"CIFAR-10\", \"CIFAR-100\"]. Do not include any extra text in the response other than the answer array.\n\nQuery: I want to train a model for object counting in high-resolution aerial images.\n\nKeyphrases: object counting aerial images\n\nDataset description:\nDayton: The Dayton dataset is a dataset for ground-to-aerial (or aerial-to-ground) image translation, or cross-view image synthesis. It contains images of road views and aerial views of roads.\nRoadTracer: RoadTracer is a dataset for extraction of road networks from aerial images. It consists of a large corpus of high-resolution satellite imagery and ground truth road network graphs covering the urban core of forty cities across six countries.\nCOWC: The Cars Overhead With Context (COWC) data set is a large set of annotated cars from overhead. It is useful for training a device such as a deep neural network to learn to detect and/or count cars.\nCARPK: The Car Parking Lot Dataset (CARPK) contains nearly 90,000 cars from 4 different parking lots collected by means of drone (PHANTOM 3 PROFESSIONAL). The images are collected with the drone-view at approximate 40 meters height.\nUCF-QNRF: The UCF-QNRF dataset is a crowd counting dataset and it contains large diversity both in scenes, as well as in background types. It consists of 1535 images high-resolution images from Flickr, Web Search and Hajj footage.\n\nOptions:- A: Dayton B: RoadTracer C: COWC D: CARPK E: UCF-QNRF",
"role": "user"
},
{
"content": "[\"COWC\", \"CARPK\"]",
"role": "assistant"
}
] |
science.chia_ner
|
science.chia_ner.1914
|
[
{
"content": "You will be shown inclusion or exclusion criteria for a clinical trial. Given these criteria, your task is to extract all unique entities of the following types: [\"Observation\", \"Condition\", \"Procedure\", \"Drug\", \"Temporal\", \"Multiplier\", \"Qualifier\", \"Negation\", \"ReferencePoint\", \"Scope\", \"Value\", \"Mood\", \"Device\", \"Visit\", \"Measurement\", \"Person\"].\n\nPlease return the output as a JSON object of the format: {\"Condition\" : [\"breast cancer\", ...], \"Device\" : [\"ports\", ...], \"Drug\" : [\"Atenolol\", ...], \"Measurement\" : [\"blood pressure\", ...], \"Mood\" : [\"suspected\", ...], \"Multiplier\" : [\"more than two\", ...], \"Negation\" : [\"no\", ...], \"Observation\" : [\"history\", ...], \"Person\" : [\"male\", ...], \"Procedure\" : [\"surgery\", ...], \"Qualifier\" : [\"severe\", ...], \"ReferencePoint\" : [\"enrollment\", ...], \"Scope\" : [\"infection\", ...], \"Temporal\" : [\"one day\", ...], \"Value\" : [\"50%\", ...], \"Visit\" : [\"ICU\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nChronic pain or narcotic usage during the preceding 30 days\nInfection at or near the intended needle insertion site\nComplex or altered abdominal wall anatomy\nWeight <45kg",
"role": "user"
},
{
"content": "{\"Condition\": [\"altered abdominal wall anatomy\", \"mplex o dominal wall anatomy\\nW\", \"Infection\", \"Chronic pain\"], \"Device\": [], \"Drug\": [\"narcotic\"], \"Measurement\": [\"Weight\"], \"Mood\": [], \"Multiplier\": [], \"Negation\": [], \"Observation\": [], \"Person\": [], \"Procedure\": [], \"Qualifier\": [\"intended needle insertion site\"], \"ReferencePoint\": [], \"Scope\": [\"Chronic pain or narcotic\"], \"Temporal\": [\"during the preceding 30 days\"], \"Value\": [\"\\u003c45kg\"], \"Visit\": []}",
"role": "assistant"
}
] |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.