
Counting words between marked tokens in R

Asked by a Stack Overflow user on 2019-07-27 · 1 answer · 101 views · 1 vote

I have several text files that I import into a corpus. Each text has several sections, supposedly written on different days, with each day marked by #. Weeks are marked with $. For each text, how can I count how many words belong to each day and to each week? Text T1 has its days marked with #, and I need the word count for each day; the weeks are separated by $, and I also need the weekly word counts. The same applies to texts T2, T3, ... Tn. How can I do this in R with quanteda?

<T1>
 (25.02.2009) This chapter thoroughly describes the idea of analyzing text “as data” with a social science focus. It traces a brief history of this approach and distinguishes it from alternative approaches to text. It identifies the key research designs and methods for various ways that scholars in political science and international relations have used text, with references to fields such as natural language processing and computational linguistics from which some of the key methods are influenced or inherited. It surveys the varieties of ways that textual data is used and analyzed, covering key methods and pointing to applications of each. It also identifies the key stages of a research design using text as data, and critically discusses the practical and epistemological challenges at each stage.                                                        

# (26.02.2009) Probabilistic methods for classifying text form a rich tradition in machine learning and natural language processing. For many important problems, however, class prediction is uninteresting because the class is known, and instead the focus shifts to estimating latent quantities related to the text, such as affect or ideology. We focus on one such problem of interest, estimating the ideological positions of 55 Irish legislators in the 1991 Dail confidence vote. To solve the Dail scaling problem and others like it, we develop a text modeling framework that allows actors to take latent positions on a “gray” spectrum between “black” and “white” polar opposites. We are able to validate results from this model by measuring the influences exhibited by individual words, and we are able to quantify the uncertainty in the scaling estimates by using a sentence-level block bootstrap. Applying our method to the Dail debate, we are able to scale the legislators between extreme pro-government and pro-opposition in a way that reveals nuances in their speeches not captured by their votes or party affiliations.                       

# (28.02.2009) Borrowing from automated “text as data” approaches, we show how statistical scaling models can be applied to hand-coded content analysis to improve estimates of political parties’ left-right policy positions. We apply a Bayesian item-response theory (IRT) model to category counts from coded party manifestos, treating the categories as “items” and policy positions as a latent variable. This approach also produces direct estimates of how each policy category relates to left-right ideology, without having to decide these relationships in advance based on out of sample fitting, political theory, assertion, or guesswork. This approach not only prevents the misspecification endemic to a fixed-index approach, but also works well even with items that are not specifically designed to measure ideological positioning.              
# (02.03.2009) This chapter thoroughly describes the idea of analyzing text “as data” with a social science focus. It traces a brief history of this approach and distinguishes it from alternative approaches to text. It identifies the key research designs and methods for various ways that scholars in political science and international relations have used text, with references to fields such as natural language processing and computational linguistics from which some of the key methods are influenced or inherited. It surveys the varieties of ways that textual data is used and analyzed, covering key methods and pointing to applications of each. It also identifies the key stages of a research design using text as data, and critically discusses the practical and epistemological challenges at each stage. .                                           

# (03.03.2009) Probabilistic methods for classifying text form a rich tradition in machine learning and natural language processing. For many important problems, however, class prediction is uninteresting because the class is known, and instead the focus shifts to estimating latent quantities related to the text, such as affect or ideology. We focus on one such problem of interest, estimating the ideological positions of 55 Irish legislators in the 1991 Dail confidence vote. To solve the Dail scaling problem and others like it, we develop a text modeling framework that allows actors to take latent positions on a “gray” spectrum between “black” and “white” polar opposites. We are able to validate results from this model by measuring the influences exhibited by individual words, and we are able to quantify the uncertainty in the scaling estimates by using a sentence-level block bootstrap. Applying our method to the Dail debate, we are able to scale the legislators between extreme pro-government and pro-opposition in a way that reveals nuances in their speeches not captured by their votes or party affiliations.                                    

#
($)
 (04.03.2009) Borrowing from automated “text as data” approaches, we show how statistical scaling models can be applied to hand-coded content analysis to improve estimates of political parties’ left-right policy positions. We apply a Bayesian item-response theory (IRT) model to category counts from coded party manifestos, treating the categories as “items” and policy positions as a latent variable. This approach also produces direct estimates of how each policy category relates to left-right ideology, without having to decide these relationships in advance based on out of sample fitting, political theory, assertion, or guesswork. This approach not only prevents the misspecification endemic to a fixed-index approach, but also works well even with items that are not specifically designed to measure ideological positioning.                                      

# (05.03.2009) Probabilistic methods for classifying text form a rich tradition in machine learning and natural language processing. For many important problems, however, class prediction is uninteresting because the class is known, and instead the focus shifts to estimating latent quantities related to the text, such as affect or ideology. We focus on one such problem of interest, estimating the ideological positions of 55 Irish legislators in the 1991 Dail confidence vote. To solve the Dail scaling problem and others like it, we develop a text modeling framework that allows actors to take latent positions on a “gray” spectrum between “black” and “white” polar opposites. We are able to validate results from this model by measuring the influences exhibited by individual words, and we are able to quantify the uncertainty in the scaling estimates by using a sentence-level block bootstrap. Applying our method to the Dail debate, we are able to scale the legislators between extreme pro-government and pro-opposition in a way that reveals nuances in their speeches not captured by their votes or party affiliations.  
# (06.03.2009)  This chapter thoroughly describes the idea of analyzing text “as data” with a social science focus. It traces a brief history of this approach and distinguishes it from alternative approaches to text. It identifies the key research designs and methods for various ways that scholars in political science and international relations have used text, with references to fields such as natural language processing and computational linguistics from which some of the key methods are influenced or inherited. It surveys the varieties of ways that textual data is used and analyzed, covering key methods and pointing to applications of each. It also identifies the key stages of a research design using text as data, and critically discusses the practical and epistemological challenges at each stage. 

# (07.03.2009)  This chapter thoroughly describes the idea of analyzing text “as data” with a social science focus. It traces a brief history of this approach and distinguishes it from alternative approaches to text. It identifies the key research designs and methods for various ways that scholars in political science and international relations have used text, with references to fields such as natural language processing and computational linguistics from which some of the key methods are influenced or inherited. It surveys the varieties of ways that textual data is used and analyzed, covering key methods and pointing to applications of each. It also identifies the key stages of a research design using text as data, and critically discusses the practical and epistemological challenges at each stage. 

# (08.03.2009) Probabilistic methods for classifying text form a rich tradition in machine learning and natural language processing. For many important problems, however, class prediction is uninteresting because the class is known, and instead the focus shifts to estimating latent quantities related to the text, such as affect or ideology. We focus on one such problem of interest, estimating the ideological positions of 55 Irish legislators in the 1991 Dail confidence vote. To solve the Dail scaling problem and others like it, we develop a text modeling framework that allows actors to take latent positions on a “gray” spectrum between “black” and “white” polar opposites. We are able to validate results from this model by measuring the influences exhibited by individual words, and we are able to quantify the uncertainty in the scaling estimates by using a sentence-level block bootstrap. Applying our method to the Dail debate, we are able to scale the legislators between extreme pro-government and pro-opposition in a way that reveals nuances in their speeches not captured by their votes or party affiliations.                    

# (09.03.2009) Borrowing from automated “text as data” approaches, we show how statistical scaling models can be applied to hand-coded content analysis to improve estimates of political parties’ left-right policy positions. We apply a Bayesian item-response theory (IRT) model to category counts from coded party manifestos, treating the categories as “items” and policy positions as a latent variable. This approach also produces direct estimates of how each policy category relates to left-right ideology, without having to decide these relationships in advance based on out of sample fitting, political theory, assertion, or guesswork. This approach not only prevents the misspecification endemic to a fixed-index approach, but also works well even with items that are not specifically designed to measure ideological positioning.                          

# (10.03.2009) This chapter thoroughly describes the idea of analyzing text “as data” with a social science focus. It traces a brief history of this approach and distinguishes it from alternative approaches to text. It identifies the key research designs and methods for various ways that scholars in political science and international relations have used text, with references to fields such as natural language processing and computational linguistics from which some of the key methods are influenced or inherited. It surveys the varieties of ways that textual data is used and analyzed, covering key methods and pointing to applications of each. It also identifies the key stages of a research design using text as data, and critically discusses the practical and epistemological challenges at each stage.                             

#
($)

1 Answer

Answered by a Stack Overflow user (accepted) on 2019-07-28

Those texts look familiar!

If I assign the text above to txt, then you can wrap it in a quanteda corpus and split it on the markers using corpus_segment().

library("quanteda")
## Package version: 1.5.0

corp <- corpus(txt) %>%
  corpus_segment(pattern = "($)", valuetype = "fixed", pattern_position = "after") %>%
  corpus_segment(pattern = "\\(\\d{2}\\.\\d{2}\\.\\d{4}\\)", valuetype = "regex", pattern_position = "before")

The first segmentation splits along the "weeks", but since that marker carries no value, we simply segment again to pick up the dates. This produces:

sapply(head(texts(corp)), substring, 1, 100)
##                                                                                                text1.1.1 
## "This chapter thoroughly describes the idea of analyzing text \"as data\" with a social science focus. " 
##                                                                                                text1.1.2 
##   "Probabilistic methods for classifying text form a rich tradition in machine learning and natural lan" 
##                                                                                                text1.1.3 
## "Borrowing from automated \"text as data\" approaches, we show how statistical scaling models can be ap" 
##                                                                                                text1.1.4 
## "This chapter thoroughly describes the idea of analyzing text \"as data\" with a social science focus. " 
##                                                                                                text1.1.5 
##   "Probabilistic methods for classifying text form a rich tradition in machine learning and natural lan" 
##                                                                                                text1.2.1 
## "Borrowing from automated \"text as data\" approaches, we show how statistical scaling models can be ap"

Even better is to tidy up the extracted tags and turn them into actual dates, which you can later use to split the corpus into weeks or any other date range you want.

# tidy up docvars
names(docvars(corp))[1] <- "date"
docvars(corp, "date") <-
  stringi::stri_replace_all_fixed(docvars(corp, "date"), c("(", ")"), c("", ""), vectorize_all = FALSE) %>%
  lubridate::dmy()

summary(corp)
## Corpus consisting of 12 documents:
## 
##       Text Types Tokens Sentences       date
##  text1.1.1    83    135         6 2009-02-25
##  text1.1.2   119    195         7 2009-02-26
##  text1.1.3    96    137         5 2009-02-28
##  text1.1.4    83    136         6 2009-03-02
##  text1.1.5   119    195         7 2009-03-03
##  text1.2.1    96    137         5 2009-03-04
##  text1.2.2   119    195         7 2009-03-05
##  text1.2.3    83    135         6 2009-03-06
##  text1.2.4    83    135         6 2009-03-07
##  text1.2.5   119    195         7 2009-03-08
##  text1.2.6    96    137         5 2009-03-09
##  text1.2.7    83    135         6 2009-03-10
## 
## Source: /private/var/folders/1v/ps2x_tvd0yg0lypdlshg_vwc0000gp/T/RtmpDG9tad/reprexd97c6e16bef8/* on x86_64 by kbenoit
## Created: Sun Jul 28 11:29:45 2019
## Notes: corpus_segment.corpus(., pattern = "\\(\\d{2}\\.\\d{2}\\.\\d{4}\\)", valuetype = "regex", pattern_position = "before")
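The segmentation above yields one document per day, so answering the original question is just a matter of counting tokens per document and then aggregating by week. A possible sketch, continuing from the corp object created above; the isoweek() grouping by calendar week is my own choice and not part of the original answer:

```r
library("quanteda")
library("lubridate")

# Each segmented document is one day, so ntoken() gives the daily word counts
words_per_day <- data.frame(
  date  = docvars(corp, "date"),
  words = ntoken(corp)
)

# Aggregate the daily counts into weekly totals by ISO week number
words_per_week <- aggregate(words ~ isoweek(date), data = words_per_day, FUN = sum)
```

Alternatively, the second index in docnames such as text1.2.1 already records the $-delimited week from the first corpus_segment() call, so splitting docnames(corp) would group by the author's own week markers rather than by calendar weeks.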
0 votes
Original question: https://stackoverflow.com/questions/57233316