
Python 3 crawler example: scraping data and storing it in a MySQL database

(Editor: jimmy  Date: 2024/5/10)

This article walks through a Python 3 crawler that scrapes data and stores it in a MySQL database. It is shared here for reference; the details follow.

The target is the order feed of a desktop client, suggested by boss Luo. The packet-capture tool is HttpAnalyzerStdV7, similar to the F12 developer tools built into Chrome. The client has an "order hall" that lists brief information for every open order; once an order is taken, it disappears from the list. The goal is to record every newly posted order into my database, zyc.

The crawler is set to run every 10 seconds.

The packet-capture tool's page looks like this:

[Screenshot: HttpAnalyzerStdV7 capture page]

First comes the crawler: locate the page where the data lives, then pull the fields out with regular expressions.

# -*- coding:utf-8 -*-
import re
import requests
import pymysql  # MySQL module for Python 3 (Python 2 uses MySQLdb)
import datetime
import time

def GetResults():
  requests.adapters.DEFAULT_RETRIES = 5  # retry dropped connections a few times instead of failing at once
  # One regex per field of the order JSON. These patterns are greedy ("(.*");
  # non-greedy "(.*?)" is generally safer when several fields share one line.
  reg = [r'"id":(.*"order_no":"(.*",',
      r'"order_title":"(.*",',
      r'"publish_desc":"(.*",',
      r'"game_area":"(.*",',
      r'"order_current":"(.*",',
      r'"order_content":"(.*",',
      r'"order_hours":(.*"order_price":"(.*",',
      r'"add_price":"(.*",',
      r'"safe_money":"(.*",',
      r'"speed_money":"(.*",',
      r'"order_status_desc":"(.*",',
      r'"order_lock_desc":"(.*",',
      r'"cancel_type_desc":"(.*",',
      r'"kf_status_desc":"(.*",',
      r'"is_show_pwd":(.*"game_pwd":"(.*",',
      r'"game_account":"(.*",',
      r'"game_actor":"(.*",',
      r'"left_hours":"(.*",',
      r'"created_at":"(.*",',
      r'"account_id":"(.*",',
      r'"mobile":"(.*",',
      r'"contact":"(.*",',
      r'"qq":"(.*"},']
  results = []
  try:
    for l in range(1, 2):   # page numbers
      proxy = {'HTTP': '61.135.155.82:443'}  # proxy IP
      # The hall URL's query string was lost from the original listing.
      html = requests.get('https://www.dianjingbaozi.com/api/dailian/soldier/hall', proxies=proxy).text
      # Extract the order numbers: the detail-page URL depends on the order number.
      outcome_reg_order_no = re.findall(r'"order_no":"(.*","game_area"', html)
      for j in range(len(outcome_reg_order_no)):
        # The detail URL's query string (which carries the order number) was likewise lost.
        html_order = requests.get('http://www.lpergame.com/api/dailian/order/detail', proxies=proxy).text
        result = []
        for pattern in reg:  # apply every field pattern to the detail page
          for item in re.findall(pattern, html_order):
            if isinstance(item, tuple):  # patterns with two groups yield tuples
              result.extend(item)
            else:
              result.append(item)
        results.append(result)
  except:
    print("fetch failed")
  return results
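The patterns above use greedy `(.*` groups, which can over-match when several fields sit on the same line; a non-greedy `(.*?)` group stops at the first delimiter. A minimal sketch of the difference (the sample string is made up for illustration, not real API output):

```python
import re

# Hypothetical one-line response in the shape the patterns expect.
sample = '{"order_no":"A1001","order_title":"level boost","qq":"12345"}'

# Greedy: .* runs to the LAST '","' on the line, swallowing later fields.
greedy = re.findall(r'"order_no":"(.*)","', sample)

# Non-greedy: .*? stops at the FIRST '","', giving just the field value.
lazy = re.findall(r'"order_no":"(.*?)","', sample)

print(greedy)  # ['A1001","order_title":"level boost']  (over-matched)
print(lazy)    # ['A1001']
```

With single-field lines, greedy and non-greedy behave the same, which is why the original patterns mostly work; non-greedy is simply the safer default.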

Create the table to match the crawler's output; the column names must be exact. A unique index is also needed so that each run inserts only the orders it has not seen before.

def mysql_create():
  mysql_host = ''
  mysql_db = 'zyc'
  mysql_user = 'zyc'
  mysql_password = ''
  mysql_port = 3306
  # Connect with charset='utf8'; otherwise Chinese text comes back garbled.
  db = pymysql.connect(host=mysql_host, port=mysql_port, user=mysql_user,
             password=mysql_password, db=mysql_db, charset='utf8')
  sql_create = ("CREATE TABLE DUMPLINGS ("
         "id CHAR(10),order_no CHAR(50),order_title VARCHAR(265),publish_desc VARCHAR(265),game_name VARCHAR(265),"
         "game_area VARCHAR(265),game_area_distinct VARCHAR(265),order_current VARCHAR(3908),order_content VARCHAR(3908),order_hours CHAR(10),"
         "order_price FLOAT(10),add_price FLOAT(10),safe_money FLOAT(10),speed_money FLOAT(10),order_status_desc VARCHAR(265),"
         "order_lock_desc VARCHAR(265),cancel_type_desc VARCHAR(265),kf_status_desc VARCHAR(265),is_show_pwd TINYINT,game_pwd CHAR(50),"
         "game_account VARCHAR(265),game_actor VARCHAR(265),left_hours VARCHAR(265),created_at VARCHAR(265),account_id CHAR(50),"
         "mobile VARCHAR(265),mobile2 VARCHAR(265),contact VARCHAR(265),contact2 VARCHAR(265),qq VARCHAR(265),"
         "PRIMARY KEY (`id`),UNIQUE KEY `no`(`order_no`))ENGINE=InnoDB AUTO_INCREMENT=12 DEFAULT CHARSET=utf8")
  sql_key = "CREATE UNIQUE INDEX id ON DUMPLINGS(id)"
  cursor = db.cursor()
  cursor.execute("DROP TABLE IF EXISTS DUMPLINGS")
  cursor.execute(sql_create)  # execute the SQL statement
  cursor.execute(sql_key)
  db.close()  # close the database connection
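The unique key on `order_no` is what makes repeated crawls idempotent: re-inserting an already-seen order violates the constraint and the row is skipped. A self-contained sketch of the same idea, using the standard-library `sqlite3` as a stand-in for MySQL/pymysql so it runs without a server (`INSERT OR IGNORE` here plays the role of catching MySQL's duplicate-key error):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Same shape as DUMPLINGS, trimmed to two columns for the demo.
cur.execute("CREATE TABLE orders (order_no TEXT, title TEXT, UNIQUE(order_no))")

# Two crawls that both saw order A1: the duplicate is silently skipped.
for order_no, title in [("A1", "boost"), ("A2", "coach"), ("A1", "boost")]:
    cur.execute("INSERT OR IGNORE INTO orders VALUES (?, ?)", (order_no, title))

cur.execute("SELECT COUNT(*) FROM orders")
print(cur.fetchone()[0])  # 2 -- the repeated A1 was not inserted twice
```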

Load the data into MySQL, paying attention to encoding and to matching each value with its column.

def IntoMysql(results):
  mysql_host = ''
  mysql_db = 'zyc'
  mysql_user = 'zyc'
  mysql_password = ''
  mysql_port = 3306
  # Connect with charset='utf8'; otherwise Chinese text comes back garbled.
  db = pymysql.connect(host=mysql_host, port=mysql_port, user=mysql_user,
             password=mysql_password, db=mysql_db, charset='utf8')
  cursor = db.cursor()
  for j in range(len(results)):
    try:
      sql = ("INSERT INTO DUMPLINGS(id,order_no,order_title,publish_desc,game_name,"
          "game_area,game_area_distinct,order_current,order_content,order_hours,"
          "order_price,add_price,safe_money,speed_money,order_status_desc,"
          "order_lock_desc,cancel_type_desc,kf_status_desc,is_show_pwd,game_pwd,"
          "game_account,game_actor,left_hours,created_at,account_id,"
          "mobile,mobile2,contact,contact2,qq) VALUES (")
      for i in range(len(results[j])):
        sql = sql + "'" + results[j][i] + "',"
      sql = sql[:-1] + ")"
      # No .encode('utf-8') needed: with charset='utf8' on the connection,
      # pymysql handles the encoding, and execute() expects str, not bytes.
      cursor.execute(sql)
      db.commit()
    except:
      # Rows that violate the unique key (orders already stored) land here
      # and are skipped, which is what keeps repeated crawls clean.
      pass
  db.close()
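Concatenating values straight into the SQL string breaks as soon as a field contains a quote, and it is SQL-injection-prone; pymysql supports `%s` placeholders, letting the driver do the quoting. A sketch of how the insert could be built instead (the column list is taken from the table above; the live connection is assumed and left commented out):

```python
# Column list copied from the DUMPLINGS definition above.
columns = ("id,order_no,order_title,publish_desc,game_name,"
      "game_area,game_area_distinct,order_current,order_content,order_hours,"
      "order_price,add_price,safe_money,speed_money,order_status_desc,"
      "order_lock_desc,cancel_type_desc,kf_status_desc,is_show_pwd,game_pwd,"
      "game_account,game_actor,left_hours,created_at,account_id,"
      "mobile,mobile2,contact,contact2,qq")

n_cols = len(columns.split(","))          # one placeholder per column
placeholders = ",".join(["%s"] * n_cols)
sql = "INSERT INTO DUMPLINGS(%s) VALUES (%s)" % (columns, placeholders)

print(n_cols)  # 30
# With a live connection, the whole batch goes in one call and the driver
# escapes every value:
#   cursor.executemany(sql, results)
#   db.commit()
```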

Run it once every ten seconds.

mysql_create()
i=0
while True:
  results = GetResults()
  IntoMysql(results)
  i=i+1
  print("crawl count:", i)
  time.sleep(10)
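The `requests.adapters.DEFAULT_RETRIES = 5` line in `GetResults` is a global tweak whose effect is easy to misread; an explicit retry wrapper makes the behavior visible and local. A standard-library-only sketch (the helper name and delay are my own, not from the original code):

```python
import time

def fetch_with_retries(fetch, attempts=5, delay=2):
    """Call fetch() until it succeeds or the attempts run out."""
    last_error = None
    for attempt in range(attempts):
        try:
            return fetch()
        except Exception as err:   # network errors, timeouts, ...
            last_error = err
            time.sleep(delay)      # back off before retrying
    raise last_error               # all attempts failed

# Hypothetical usage inside the crawler:
#   html = fetch_with_retries(lambda: requests.get(hall_url).text)
```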

The result looks like this:

[Screenshot: crawled orders stored in the database]


I hope this article is helpful to readers working on Python programming.
