Source: https://blog.csdn.net/2301_79731058/article/details/143269620

Data source: https://datahack.analyticsvidhya.com/contest/linguipedia-codefest-natural-language-processing-1/?utm_source=word-embeddings-count-word2veec&utm_medium=blog

Import libraries

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, classification_report
from nltk.corpus import stopwords
import nltk
import re

# Download the NLTK stop-word list (only needed on the first run).
nltk.download('stopwords')
stop_words = set(stopwords.words('english'))

Text preprocessing

# Text preprocessing function
def preprocess_text(text):
    # Remove special characters, digits, etc.
    text = re.sub(r'[^a-zA-Z\s]', '', text)
    # Convert to lower case
    text = text.lower()
    # Remove stop words
    text = ' '.join([word for word in text.split() if word not in stop_words])
    return text
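
A quick sanity check of the preprocessing (the sample tweet below is made up for illustration):

# Punctuation, digits and stop words are stripped, and the text is lower-cased.
sample = "I LOVED the new phone!!! Battery lasts 2 days :)"
print(preprocess_text(sample))
# Expected output: "loved new phone battery lasts days"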

Read and preprocess the dataset

# Read the local CSV file
data = pd.read_csv("F:/Sentiment analysis/Data_dictionary/train_2kmZucJ.csv")
# Drop rows with missing values
data = data.dropna(subset=['tweet', 'label'])
# Apply the preprocessing function to the tweet text
data['tweet'] = data['tweet'].apply(preprocess_text)
# Split the dataset into training and test sets
X_train, X_test, y_train, y_test = train_test_split(data['tweet'], data['label'], test_size=0.2, random_state=42)
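
If the label distribution is skewed, a stratified split keeps roughly the same class ratio in both sets; a minimal optional variant:

# Inspect the label distribution, then split with stratify so that the training
# and test sets keep roughly the same class proportions (optional variant).
print(data['label'].value_counts())
X_train, X_test, y_train, y_test = train_test_split(
    data['tweet'], data['label'], test_size=0.2, random_state=42, stratify=data['label']
)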

Define the training function

# Define the training function
def train(model, model_name):
    model.fit(X_train_vectorized, y_train)
    predictions = model.predict(X_test_vectorized)
    print(model_name + " Accuracy:", accuracy_score(y_test, predictions))
    print(classification_report(y_test, predictions))

Vectorize the tweet text with TF-IDF

vectorizer = TfidfVectorizer(max_features=1000)
X_train_vectorized = vectorizer.fit_transform(X_train)
X_test_vectorized = vectorizer.transform(X_test)
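
To verify the vectorization step, the shapes of the sparse matrices and a sample of the learned vocabulary can be inspected (get_feature_names_out requires scikit-learn 1.0 or newer):

# The TF-IDF matrices are sparse, with at most 1000 columns (max_features).
print(X_train_vectorized.shape, X_test_vectorized.shape)
# A few of the learned vocabulary terms.
print(vectorizer.get_feature_names_out()[:10])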

Naive Bayes model as an example

# Using the Naive Bayes algorithm as an example
nb_model = MultinomialNB()
train(nb_model, "Naive Bayes")
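
Since SVC and LogisticRegression are already imported, the same train helper can be reused to compare models; the hyperparameters below are illustrative defaults, not tuned values:

# Reuse the same TF-IDF features to compare a linear SVM and logistic regression.
svm_model = SVC(kernel='linear')
train(svm_model, "SVM")
lr_model = LogisticRegression(max_iter=1000)
train(lr_model, "Logistic Regression")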

Load the prediction dataset and save the results

# Load the prediction dataset
test = pd.read_csv("F:/Sentiment analysis/Data_dictionary/test_oJQbWVk.csv")
# Apply the same preprocessing to the test tweets before vectorizing
test['tweet'] = test['tweet'].apply(preprocess_text)
test_predictions = nb_model.predict(vectorizer.transform(test['tweet']))
# Save the predictions
results_df = pd.DataFrame({'id': test['id'], 'label': test_predictions})
results_df.to_csv("predictions.csv", index=False)
print("Predictions saved")

The complete code:

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, classification_report
from nltk.corpus import stopwords
import nltk
import re

# Download the NLTK stop-word list (only needed on the first run).
nltk.download('stopwords')
stop_words = set(stopwords.words('english'))

# Text preprocessing function
def preprocess_text(text):
    # Remove special characters, digits, etc.
    text = re.sub(r'[^a-zA-Z\s]', '', text)
    # Convert to lower case
    text = text.lower()
    # Remove stop words
    text = ' '.join([word for word in text.split() if word not in stop_words])
    return text

# Read the local CSV file
data = pd.read_csv("F:/Sentiment analysis/Data_dictionary/train_2kmZucJ.csv")
# Drop rows with missing values
data = data.dropna(subset=['tweet', 'label'])
# Apply the preprocessing function to the tweet text
data['tweet'] = data['tweet'].apply(preprocess_text)
# Split the dataset into training and test sets
X_train, X_test, y_train, y_test = train_test_split(data['tweet'], data['label'], test_size=0.2, random_state=42)

# Define the training function
def train(model, model_name):
    model.fit(X_train_vectorized, y_train)
    predictions = model.predict(X_test_vectorized)
    print(model_name + " Accuracy:", accuracy_score(y_test, predictions))
    print(classification_report(y_test, predictions))

# Vectorize the tweet text with TF-IDF
vectorizer = TfidfVectorizer(max_features=1000)
X_train_vectorized = vectorizer.fit_transform(X_train)
X_test_vectorized = vectorizer.transform(X_test)

# Using the Naive Bayes algorithm as an example
nb_model = MultinomialNB()
train(nb_model, "Naive Bayes")

# Load the prediction dataset
test = pd.read_csv("F:/Sentiment analysis/Data_dictionary/test_oJQbWVk.csv")
# Apply the same preprocessing to the test tweets before vectorizing
test['tweet'] = test['tweet'].apply(preprocess_text)
test_predictions = nb_model.predict(vectorizer.transform(test['tweet']))
# Save the predictions
results_df = pd.DataFrame({'id': test['id'], 'label': test_predictions})
results_df.to_csv("predictions.csv", index=False)
print("Predictions saved")
