1. Download a scaffold from the Spring Boot website according to your needs, or search GitHub for a matching scaffold project. D_iao ^0^
• The file layout is shown below (ignore generatorConfig.xml and log4j2.xml here; they are covered later).

2. Use the MyBatis generator plugin to generate code
• Gradle configuration:

// dependency required for MyBatis code generation
compile group: 'org.mybatis.generator', name: 'mybatis-generator-core', version: '1.3.3'
// MyBatis code-generation plugin
apply plugin: 'com.arenagod.gradle.MybatisGenerator'

configurations {
    mybatisGenerator
}

mybatisGenerator {
    verbose = true
    // path to the generator configuration file
    configFile = 'src/main/resources/generatorConfig.xml'
}

• generatorConfig.xml in detail:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE generatorConfiguration PUBLIC "-//mybatis.org//DTD MyBatis Generator Configuration 1.0//EN"
        "http://mybatis.org/dtd/mybatis-generator-config_1_0.dtd">
<generatorConfiguration>
    <!-- path to the database driver jar -->
    <!-- the driver jar can be found in the project's dependency cache; just copy the path here -->
    <classPathEntry location="C:\Users\pc\.gradle\caches\modules-2\files-2.1\mysql\mysql-connector-java\5.1.38\dbbd7cd309ce167ec8367de4e41c63c2c8593cc5\mysql-connector-java-5.1.38.jar"/>
    <context id="mysql" targetRuntime="MyBatis3">
        <!-- disable generated comments -->
        <commentGenerator>
            <property name="suppressAllComments" value="true"/>
        </commentGenerator>
        <!-- database connection info -->
        <jdbcConnection driverClass="com.mysql.jdbc.Driver"
                        connectionURL="jdbc:mysql://127.0.0.1:3306/xxx"
                        userId="root"
                        password=""/>
        <!-- package for the generated models; rootClass is the base class every model extends,
             trimStrings trims string fields of the model -->
        <javaModelGenerator targetPackage="com.springboot.mybatis.demo.model"
                            targetProject="D:/self-code/spring-boot-mybatis/spring-boot-mybatis/src/main/java">
            <property name="enableSubPackages" value="true"/>
            <property name="trimStrings" value="true"/>
            <property name="rootClass" value="com.springboot.mybatis.demo.model.common.BaseModel"/>
        </javaModelGenerator>
        <!-- output path for the generated mapper XML files -->
        <sqlMapGenerator targetPackage="mapper"
                         targetProject="D:/self-code/spring-boot-mybatis/spring-boot-mybatis/src/main/resources">
            <property name="enableSubPackages" value="true"/>
        </sqlMapGenerator>
        <!-- output path for the generated Mapper interfaces -->
        <javaClientGenerator type="XMLMAPPER"
                             targetPackage="com.springboot.mybatis.demo.mapper"
                             targetProject="D:/self-code/spring-boot-mybatis/spring-boot-mybatis/src/main/java">
            <property name="enableSubPackages" value="true"/>
        </javaClientGenerator>
        <!-- the table to generate from, the basis for the mapper XML file;
             if enableCountByExample is true, Example code is generated in the XML, which is far too verbose, so leave it off -->
        <table tableName="tb_user" domainObjectName="User"
               enableCountByExample="false" enableDeleteByExample="false"
               enableSelectByExample="false" enableUpdateByExample="false"/>
    </context>
</generatorConfiguration>

In the configuration above, note that targetProject should be an absolute path to avoid errors. targetPackage is the package the generated classes live in (make sure the package exists, otherwise nothing is generated); it simply maps to the matching directory under targetProject.

• Code generation: once configured, first create the corresponding table in the database and make sure the database is reachable, then run gradle mbGenerator in a terminal or click the task shown below. On success it generates the model, the mapper interface and the XML file.
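The rootClass above points at com.springboot.mybatis.demo.model.common.BaseModel, which the post never shows. A minimal sketch of what such a base class could look like; the audit fields here are assumptions, not the project's actual ones:

package com.springboot.mybatis.demo.model.common;

import java.io.Serializable;
import java.util.Date;

// Hypothetical base class that every generated model extends via the rootClass setting.
public class BaseModel implements Serializable {

    // assumed common audit columns shared by all tables; adjust to the real schema
    private Date createTime;
    private Date updateTime;

    public Date getCreateTime() { return createTime; }
    public void setCreateTime(Date createTime) { this.createTime = createTime; }
    public Date getUpdateTime() { return updateTime; }
    public void setUpdateTime(Date updateTime) { this.updateTime = updateTime; }
}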
3. Integrating logging
• Gradle configuration:

compile group: 'org.springframework.boot', name: 'spring-boot-starter-log4j2', version: '1.4.0.RELEASE'
// exclude the conflicting default logging starter
configurations {
    mybatisGenerator
    compile.exclude module: 'spring-boot-starter-logging'
}

When spring-boot-starter-log4j2 is not on the classpath, startup fails with: java.lang.IllegalStateException: Logback configuration error detected (a Logback configuration error).
Cause: see https://blog.csdn.net/blueheart20/article/details/78111350?locationNum=5&fps=1
Fix: exclude the spring-boot-starter-logging dependency.
What??? After excluding it, the next run fails with: Failed to load class org.slf4j.impl.StaticLoggerBinder (the org.slf4j.impl.StaticLoggerBinder class cannot be loaded).
Cause: see https://blog.csdn.net/lwj_199011/article/details/51853110
Fix: add the spring-boot-starter-log4j2 dependency. The packages it pulls in are listed below:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starters</artifactId>
        <version>1.4.0.RELEASE</version>
    </parent>
    <artifactId>spring-boot-starter-log4j2</artifactId>
    <name>Spring Boot Log4j 2 Starter</name>
    <description>Starter for using Log4j2 for logging. An alternative to spring-boot-starter-logging</description>
    <url>http://projects.spring.io/spring-boot/</url>
    <organization>
        <name>Pivotal Software, Inc.</name>
        <url>http://www.spring.io</url>
    </organization>
    <properties>
        <main.basedir>${basedir}/../..</main.basedir>
    </properties>
    <dependencies>
        <dependency>
            <groupId>org.apache.logging.log4j</groupId>
            <artifactId>log4j-slf4j-impl</artifactId>
        </dependency>
        <dependency>
            <groupId>org.apache.logging.log4j</groupId>
            <artifactId>log4j-api</artifactId>
        </dependency>
        <dependency>
            <groupId>org.apache.logging.log4j</groupId>
            <artifactId>log4j-core</artifactId>
        </dependency>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>jcl-over-slf4j</artifactId>
        </dependency>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>jul-to-slf4j</artifactId>
        </dependency>
    </dependencies>
</project>

It depends on log4j-slf4j-impl, so the Log4j2 logging framework is what actually gets used. This touches on log4j, logback, log4j2 and slf4j. So how do they all relate? unbelievable... The relevant background:

slf4j, log4j, logback, log4j2
The logging facade (slf4j)
slf4j is a specification, a standard, an interface defined over all logging frameworks; it is not a concrete implementation. An interface cannot be used on its own, so it has to be paired with a concrete logging implementation such as log4j or logback.
Logging implementations (log4j, logback, log4j2)
log4j is an open-source logging component implemented by Apache.
logback was designed by the same author as log4j; it has better features, was meant to replace log4j, and is the native implementation of slf4j.
Log4j2 is an improved version of log4j 1.x and logback. It reportedly uses new techniques (lock-free asynchronous logging, among others) that raise throughput and performance roughly 10x over log4j 1.x, fixes some deadlock bugs, and is simpler and more flexible to configure. Official docs: http://logging.apache.org/log4j/2.x/manual/configuration.html
Why do we need a logging facade at all? Wouldn't using a concrete implementation directly be enough?
The facade defines the contract and can have many implementations. Code is written against the facade: every import is an slf4j package rather than a package of any specific logging framework. In other words the application talks only to the interface and never to the implementation, so the implementation can be swapped at will without touching any logging code.
For example, slf4j defines a set of logging interfaces and the project happens to use logback as the framework. Every call made during development goes through slf4j, never directly through logback: your project calls the slf4j API, and slf4j delegates to the logback implementation. The application never uses logback directly, so when the project wants to switch to a better framework such as log4j2, you only need to bring in the Log4j2 jar and its configuration file. None of the logging code (logger.info("xxx")) changes, and none of the imports change either:
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
Using the logging facade makes it easy to swap in another logging framework; it acts as an adapter.
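To make the point concrete, here is a small sketch of code written purely against the slf4j facade (the class name is illustrative); it compiles and runs unchanged whether logback or log4j2 sits underneath:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Only slf4j types appear here; the binding on the classpath decides
// whether logback or log4j2 actually writes the output.
public class OrderService {

    private static final Logger logger = LoggerFactory.getLogger(OrderService.class);

    public void placeOrder(String orderId) {
        // parameterized logging avoids string concatenation when the level is disabled
        logger.info("placing order {}", orderId);
        try {
            // ... business logic ...
        } catch (Exception e) {
            logger.error("failed to place order {}", orderId, e);
        }
    }
}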
log4j, logback and log4j2 are all concrete logging implementations, so each can be used on its own or combined with slf4j.

• So far we have chosen the Log4j2 framework; next comes its configuration. It can be configured either through properties or through XML, and XML is used here. For a detailed walkthrough of log4j2 configuration see https://blog.csdn.net/menghuanzhiming/article/details/77531977. The configuration, annotated, is as follows:

<?xml version="1.0" encoding="UTF-8"?>
<!-- Log levels in priority order: OFF FATAL ERROR WARN INFO DEBUG TRACE ALL -->
<!-- The status attribute on Configuration controls log4j2's own internal logging and can be left out;
     set it to "trace" to see log4j2's detailed internal output. -->
<!-- monitorInterval: log4j2 can detect changes to this file and reconfigure itself; the value is the interval in seconds. -->
<Configuration status="WARN">
    <!-- define some properties -->
    <Properties>
        <Property name="PID">????</Property>
        <Property name="LOG_PATTERN">[%d{yyyy-MM-dd HH:mm:ss.SSS}] - ${sys:PID} --- %c{1}: %m%n</Property>
    </Properties>
    <!-- appenders define where the log output goes -->
    <Appenders>
        <!-- console output -->
        <Console name="Console" target="SYSTEM_OUT" follow="true">
            <PatternLayout pattern="${LOG_PATTERN}"/>
        </Console>
        <!-- a plain file appender that logs everything; whether the file is wiped on each run is controlled by append, handy for quick tests -->
        <!-- append="true" adds messages to the file, "false" overwrites its content; the default is true -->
        <!--<File name="File" fileName="logs/log.log" append="false">-->
        <!--    <PatternLayout>-->
        <!--        <pattern>[%-5p] %d %c - %m%n</pattern>-->
        <!--    </PatternLayout>-->
        <!--</File>-->
        <!-- logs everything; each time the size limit is exceeded, that chunk of log is archived and compressed under a folder named by year-month -->
        <RollingFile name="RollingAllFile" fileName="logs/all/all.log"
                     filePattern="logs/all/$${date:yyyy-MM}/all-%d{yyyy-MM-dd}-%i.log.gz">
            <PatternLayout pattern="${LOG_PATTERN}"/>
            <Policies>
                <!-- the two policies below work together with filePattern to archive log files periodically -->
                <!-- TimeBasedTriggeringPolicy: a time-based trigger with two parameters:
                     1. interval (integer): the gap between two roll-overs; its unit is the precision of the date pattern,
                        e.g. yyyy-MM-dd-HH means hours, yyyy-MM-dd-HH-mm means minutes.
                     2. modulate (boolean): whether to align the roll-over time. With modulate=true the roll-over time is aligned
                        to midnight; e.g. with modulate=true and interval=4 hours, if the last roll-over was at 03:00 the next
                        ones happen at 04:00, then 08:00, 12:00, 16:00, ... -->
                <!--<TimeBasedTriggeringPolicy/>-->
                <!-- SizeBasedTriggeringPolicy: a size-based trigger; once a single file reaches the limit, the previous content
                     is archived under a year-month directory and compressed, named xxx-year-month-day-index -->
                <SizeBasedTriggeringPolicy size="200 MB"/>
            </Policies>
        </RollingFile>
        <!-- a ThresholdFilter selectively accepts a given level and above;
             onMatch="ACCEPT" onMismatch="DENY" means accept on match and reject everything else -->
        <RollingFile name="RollingErrorFile" fileName="logs/error/error.log"
                     filePattern="logs/error/$${date:yyyy-MM}/%d{yyyy-MM-dd}-%i.log.gz">
            <ThresholdFilter level="ERROR"/>
            <PatternLayout pattern="${LOG_PATTERN}"/>
            <Policies>
                <!--<TimeBasedTriggeringPolicy/>-->
                <SizeBasedTriggeringPolicy size="200 MB"/>
            </Policies>
        </RollingFile>
        <RollingFile name="RollingWarnFile" fileName="logs/warn/warn.log"
                     filePattern="logs/warn/$${date:yyyy-MM}/%d{yyyy-MM-dd}-%i.log.gz">
            <Filters>
                <ThresholdFilter level="WARN"/>
                <ThresholdFilter level="ERROR" onMatch="DENY" onMismatch="NEUTRAL"/>
            </Filters>
            <PatternLayout pattern="${LOG_PATTERN}"/>
            <Policies>
                <!--<TimeBasedTriggeringPolicy/>-->
                <SizeBasedTriggeringPolicy size="200 MB"/>
            </Policies>
        </RollingFile>
    </Appenders>
    <!-- then define the Loggers; an Appender only takes effect when a Logger references it -->
    <Loggers>
        <Logger name="org.hibernate.validator.internal.util.Version" level="WARN"/>
        <Logger name="org.apache.coyote.http11.Http11NioProtocol" level="WARN"/>
        <Logger name="org.apache.tomcat.util.net.NioSelectorPool" level="WARN"/>
        <Logger name="org.apache.catalina.startup.DigesterFactory" level="ERROR"/>
        <Logger name="org.springframework" level="INFO"/>
        <Logger name="com.springboot.mybatis.demo" level="DEBUG"/>
        <!-- the loggers above inherit from Root, i.e. they also write to the matching Appenders defined under Root;
             to stop the inheritance set additivity="false" and give the Logger its own <AppenderRef ref="Console"/> -->
        <Root level="INFO">
            <AppenderRef ref="Console"/>
            <AppenderRef ref="RollingAllFile"/>
            <AppenderRef ref="RollingErrorFile"/>
            <AppenderRef ref="RollingWarnFile"/>
        </Root>
    </Loggers>
</Configuration>

With that, logging is integrated and all kinds of log output show up in the terminal. very exciting!!!

log4j2 can also send e-mail. Add the dependency:

compile group: 'org.springframework.boot', name: 'spring-boot-starter-mail', version: '2.0.0.RELEASE'

Then extend the log4j2 configuration by adding the following appender under Appenders:

<!-- subject:      mail subject
     to:           recipients, comma-separated if there are several
     from:         sender
     replyTo:      reply-to account
     smtp:         for QQ mail see https://service.mail.qq.com/cgi-bin/help?subtype=1&no=167&id=28
     smtpDebug:    enable verbose logging
     smtpPassword: the authorization code, see https://service.mail.qq.com/cgi-bin/help?subtype=1&id=28&no=1001256
     smtpUsername: the user name -->
<SMTP name="Mail" subject="Error Log" to="xxx.com" from="xxx@qq.com" replyTo="xxx@qq.com"
      smtpProtocol="smtp" smtpHost="smtp.qq.com" smtpPort="587" bufferSize="50" smtpDebug="false"
      smtpPassword="authorization code" smtpUsername="xxx.com"/>

Finally, reference the new appender under Root so it takes effect:
<AppenderRef ref="Mail" level="error"/>

Done!

4. Integrating MybatisProvider
• Why? With it we can implement the basic CRUD operations through annotations combined with dynamically built SQL, instead of writing piles of repetitive, tedious SQL in XML.
• Come on ↓
First: define a base Mapper interface with the basic operations:

package com.springboot.mybatis.demo.mapper.common;

import com.springboot.mybatis.demo.mapper.common.provider.AutoSqlProvider;
import com.springboot.mybatis.demo.mapper.common.provider.MethodProvider;
import com.springboot.mybatis.demo.model.common.BaseModel;
import org.apache.ibatis.annotations.DeleteProvider;
import org.apache.ibatis.annotations.InsertProvider;
import org.apache.ibatis.annotations.SelectProvider;
import org.apache.ibatis.annotations.UpdateProvider;

import java.io.Serializable;
import java.util.List;

public interface BaseMapper<T extends BaseModel, Id extends Serializable> {

    @InsertProvider(type = AutoSqlProvider.class, method = MethodProvider.SAVE)
    int save(T entity);

    @DeleteProvider(type = AutoSqlProvider.class, method = MethodProvider.DELETE_BY_ID)
    int deleteById(Id id);

    @UpdateProvider(type = AutoSqlProvider.class, method = MethodProvider.UPDATE_BY_ID)
    int updateById(Id id);

    @SelectProvider(type = AutoSqlProvider.class, method = MethodProvider.FIND_ALL)
    List<T> findAll(T entity);

    @SelectProvider(type = AutoSqlProvider.class, method = MethodProvider.FIND_BY_ID)
    T findById(T entity);

    @SelectProvider(type = AutoSqlProvider.class, method = MethodProvider.FIND_AUTO_BY_PAGE)
    List<T> findAutoByPage(T entity);
}

Here AutoSqlProvider is the class that supplies the SQL, and MethodProvider defines the names of the basic persistence methods the provider has to implement. Both are shown below:

package com.springboot.mybatis.demo.mapper.common.provider;

import com.google.common.base.CaseFormat;
import com.springboot.mybatis.demo.mapper.common.provider.model.MybatisTable;
import com.springboot.mybatis.demo.mapper.common.provider.utils.ProviderUtils;
import org.apache.ibatis.jdbc.SQL;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.lang.reflect.Field;
import java.util.List;

public class AutoSqlProvider {

    private static Logger logger = LoggerFactory.getLogger(AutoSqlProvider.class);

    public String findAll(Object obj) {
        MybatisTable mybatisTable = ProviderUtils.getMybatisTable(obj);
        List<Field> fields = mybatisTable.getMybatisColumnList();
        SQL sql = new SQL();
        fields.forEach(field -> sql.SELECT(CaseFormat.UPPER_CAMEL.to(CaseFormat.LOWER_UNDERSCORE, field.getName())));
        sql.FROM(mybatisTable.getName());
        logger.info(sql.toString());
        return sql.toString();
    }

    public String save(Object obj) {
        // ...
        return null;
    }

    public String deleteById(String id) {
        // ...
        return null;
    }

    public String findById(Object obj) {
        // ...
        return null;
    }

    public String updateById(Object obj) {
        // ...
        return null;
    }

    public String findAutoByPage(Object obj) {
        return null;
    }
}

package com.springboot.mybatis.demo.mapper.common.provider;

public class MethodProvider {
    public static final String SAVE = "save";
    public static final String DELETE_BY_ID = "deleteById";
    public static final String UPDATE_BY_ID = "updateById";
    public static final String FIND_ALL = "findAll";
    public static final String FIND_BY_ID = "findById";
    public static final String FIND_AUTO_BY_PAGE = "findAutoByPage";
}

Note:
1. If you declare a method in BaseMapper, you must implement it in the SqlProvider class, otherwise MyBatis fails with a "method not found" error.
2. When building the SQL dynamically, even with camel-case mapping enabled, the entity properties still have to be converted to column names by hand during concatenation; it is not done automatically.
3. The SQL log statement inside the SqlProvider can be removed, since logging was already set up in the previous section.
4. ProviderUtils obtains a table's basic metadata (table name and columns) through reflection.
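Note 4 mentions ProviderUtils and MybatisTable, which the post does not show. A rough sketch of how they could be shaped, purely as an assumption for illustration (the naming convention and packages below are guesses; the real classes are in the GitHub repo):

package com.springboot.mybatis.demo.mapper.common.provider.utils;

import com.google.common.base.CaseFormat;

import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical helper: derives a table name and its column fields from an entity via reflection.
public class ProviderUtils {

    public static MybatisTable getMybatisTable(Object obj) {
        Class<?> clazz = obj instanceof Class ? (Class<?>) obj : obj.getClass();
        MybatisTable table = new MybatisTable();
        // assumption: table name is "tb_" + snake_case of the simple class name, e.g. User -> tb_user
        table.setName("tb_" + CaseFormat.UPPER_CAMEL.to(CaseFormat.LOWER_UNDERSCORE, clazz.getSimpleName()));
        table.setMybatisColumnList(new ArrayList<>(Arrays.asList(clazz.getDeclaredFields())));
        return table;
    }
}

// Hypothetical metadata holder consumed by AutoSqlProvider (the real one lives in the provider.model package).
class MybatisTable {
    private String name;
    private List<Field> mybatisColumnList;

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public List<Field> getMybatisColumnList() { return mybatisColumnList; }
    public void setMybatisColumnList(List<Field> mybatisColumnList) { this.mybatisColumnList = mybatisColumnList; }
}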
• At this point the MybatisProvider groundwork is ready. The remaining step is to have every mapper interface extend this base mapper, so that all basic CRUD is handled by BaseMapper:

package com.springboot.mybatis.demo.mapper;

import com.springboot.mybatis.demo.mapper.common.BaseMapper;
import com.springboot.mybatis.demo.model.User;

import java.util.List;

public interface UserMapper extends BaseMapper<User, String> {
}

This way UserMapper no longer has to care about any of the basic operations. wonderful !!!
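As a quick illustration of what the inherited methods buy us, a hypothetical service can now call the base CRUD methods directly (the service class below is an assumption, not part of the original project):

package com.springboot.mybatis.demo.service.impl;

import com.springboot.mybatis.demo.mapper.UserMapper;
import com.springboot.mybatis.demo.model.User;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

import java.util.List;

// Hypothetical service showing the inherited BaseMapper methods in use.
@Service
public class UserQueryService {

    @Autowired
    private UserMapper userMapper;

    public List<User> listUsers() {
        // findAll() comes from BaseMapper; no SQL or XML was written for UserMapper itself
        return userMapper.findAll(new User());
    }
}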
5. Integrating JSP
• Core dependencies:

compile group: 'org.springframework.boot', name: 'spring-boot-starter-web', version: '2.0.0.RELEASE'
// note: this one must be compile (or the default scope); with providedRuntime the JSPs cannot be rendered
compile group: 'org.apache.tomcat.embed', name: 'tomcat-embed-jasper', version: '9.0.6'
// this line resolves the conflict between the embedded and an external Tomcat; if you only use the embedded Tomcat it is not needed
providedRuntime group: 'org.springframework.boot', name: 'spring-boot-starter-tomcat', version: '2.0.2.RELEASE'

These are the two essential packages: spring-boot-starter-web pulls in Tomcat, the embedded Tomcat we usually talk about with Spring Boot, while tomcat-embed-jasper is the package that compiles JSPs. If it is missing or broken, JSP pages cannot be rendered.

• Modify the Application startup class:

@EnableTransactionManagement
@SpringBootApplication
public class Application extends SpringBootServletInitializer {

    @Override
    protected SpringApplicationBuilder configure(SpringApplicationBuilder application) {
        setRegisterErrorPageFilter(false);
        return application.sources(Application.class);
    }

    public static void main(String[] args) throws Exception {
        SpringApplication.run(Application.class, args);
    }
}

Note: the startup class must extend SpringBootServletInitializer and override its configure method.

• Create the JSP page directory; the layout is shown below.
• Next, configure how the JSP pages are resolved. There are two options.
Option 1: configure it in application.properties:

spring.mvc.view.prefix=/WEB-INF/views/
spring.mvc.view.suffix=.jsp

Then create a controller. Note that since Spring Boot 2.0, returning a JSP view requires @Controller; @RestController will not work:

@Controller // since Spring Boot 2.0, a JSP view must be returned from an @Controller, not an @RestController
public class IndexController {

    @GetMapping("/")
    public String index() {
        return "index";
    }
}

Option 2: do it with a configuration class. This way a request to http://localhost:8080/ returns the index.jsp page directly and no controller code has to be written:
@Configuration
@EnableWebMvc
public class WebMvcConfig implements WebMvcConfigurer {

    // static resources live under /static (or /public, /resources, /META-INF/resources)
    @Bean
    public InternalResourceViewResolver viewResolver() {
        InternalResourceViewResolver resolver = new InternalResourceViewResolver();
        resolver.setPrefix("/WEB-INF/views/");
        resolver.setSuffix(".jsp");
        return resolver;
    }

    @Override
    public void addViewControllers(ViewControllerRegistry registry) {
        registry.addViewController("/").setViewName("index");
    }

    // without this override the index.jsp resource cannot be found
    @Override
    public void configureDefaultServletHandling(DefaultServletHandlerConfigurer configurer) {
        configurer.enable();
    }
}

6. Integrating Shiro: authentication, authorization and sessions
• Shiro's core concerns: authentication, authorization, session management, caching, encryption.
• Integrating authentication
1) Dependencies (pull in only what you need; the list below is simply what I used when building this project, for reference ↓)

// shiro
compile group: 'org.apache.shiro', name: 'shiro-core', version: '1.3.2'    // required: the Shiro core package
compile group: 'org.apache.shiro', name: 'shiro-web', version: '1.3.2'     // web integration
compile group: 'org.apache.shiro', name: 'shiro-spring', version: '1.3.2'  // Spring integration
compile group: 'org.apache.shiro', name: 'shiro-ehcache', version: '1.3.2' // Shiro caching

2) The Shiro configuration class:
@Configuration
public class ShiroConfig {

    @Bean(name = "shiroFilter")
    public ShiroFilterFactoryBean shiroFilterFactoryBean() {
        ShiroFilterFactoryBean shiroFilterFactoryBean = new ShiroFilterFactoryBean();
        // filter chain map
        Map<String, String> filterChainDefinitionMap = new LinkedHashMap<String, String>();
        // paths that are not intercepted
        filterChainDefinitionMap.put("/static/**", "anon");
        // logout
        filterChainDefinitionMap.put("/logout", "logout");
        // paths that require authentication
        filterChainDefinitionMap.put("/**", "authc");
        // paths that require authentication plus the admin role, e.g.:
        // filterChainDefinitionMap.put("user/**", "authc,roles[admin]");
        // note: several roles may be listed and they are AND-ed (all are required); for OR semantics write your own filter
        shiroFilterFactoryBean.setFilterChainDefinitionMap(filterChainDefinitionMap);
        // login path
        shiroFilterFactoryBean.setLoginUrl("/login");
        // path to jump to after a successful login
        shiroFilterFactoryBean.setSuccessUrl("/index");
        // on failure, go back to the login page
        shiroFilterFactoryBean.setUnauthorizedUrl("/login");
        shiroFilterFactoryBean.setSecurityManager(securityManager());
        return shiroFilterFactoryBean;
    }

    @Bean
    public ShiroRealmOne shiroRealmOne() {
        ShiroRealmOne realm = new ShiroRealmOne(); // the custom Shiro realm
        return realm;
    }

    @Bean(name = "securityManager")
    public DefaultWebSecurityManager securityManager() {
        DefaultWebSecurityManager securityManager = new DefaultWebSecurityManager();
        securityManager.setRealm(shiroRealmOne());
        securityManager.setCacheManager(ehCacheManager());
        securityManager.setSessionManager(sessionManager());
        return securityManager;
    }

    @Bean(name = "ehCacheManager") // caches user information
    public EhCacheManager ehCacheManager() {
        return new EhCacheManager();
    }

    @Bean(name = "shiroCachingSessionDAO") // Shiro session DAO
    public SessionDAO shiroCachingSessionDAO() {
        EnterpriseCacheSessionDAO sessionDao = new EnterpriseCacheSessionDAO();
        sessionDao.setSessionIdGenerator(new JavaUuidSessionIdGenerator()); // session id generator
        sessionDao.setCacheManager(ehCacheManager()); // cache
        return sessionDao;
    }

    @Bean(name = "sessionManager")
    public DefaultWebSessionManager sessionManager() {
        DefaultWebSessionManager defaultWebSessionManager = new DefaultWebSessionManager();
        defaultWebSessionManager.setGlobalSessionTimeout(1000 * 60);
        defaultWebSessionManager.setSessionDAO(shiroCachingSessionDAO());
        return defaultWebSessionManager;
    }
}

The custom realm extends AuthorizingRealm and implements a simple login check:

package com.springboot.mybatis.demo.config.realm;

import com.springboot.mybatis.demo.model.Permission;
import com.springboot.mybatis.demo.model.Role;
import com.springboot.mybatis.demo.model.User;
import com.springboot.mybatis.demo.service.PermissionService;
import com.springboot.mybatis.demo.service.RoleService;
import com.springboot.mybatis.demo.service.UserService;
import com.springboot.mybatis.demo.service.impl.PermissionServiceImpl;
import com.springboot.mybatis.demo.service.impl.RoleServiceImpl;
import com.springboot.mybatis.demo.service.impl.UserServiceImpl;
import org.apache.shiro.SecurityUtils;
import org.apache.shiro.authc.*;
import org.apache.shiro.authz.AuthorizationInfo;
import org.apache.shiro.authz.SimpleAuthorizationInfo;
import org.apache.shiro.realm.AuthorizingRealm;
import org.apache.shiro.session.Session;
import org.apache.shiro.subject.PrincipalCollection;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;

import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;

public class ShiroRealmOne extends AuthorizingRealm {

    private Logger logger = LoggerFactory.getLogger(this.getClass());

    @Autowired
    private UserService userServiceImpl;
    @Autowired
    private RoleService roleServiceImpl;
    @Autowired
    private PermissionService permissionServiceImpl;

    // authorization (not discussed here; can be skipped)
    @Override
    protected AuthorizationInfo doGetAuthorizationInfo(PrincipalCollection principalCollection) {
        logger.info("doGetAuthorizationInfo " + principalCollection.toString());
        User user = userServiceImpl.findByUserName((String) principalCollection.getPrimaryPrincipal());
        List<Role> roleList = roleServiceImpl.findByUserId(user.getId());
        List<Permission> permissionList = roleList != null && !roleList.isEmpty()
                ? permissionServiceImpl.findByRoleIds(roleList.stream().map(Role::getId).collect(Collectors.toList()))
                : new ArrayList<>();
        SecurityUtils.getSubject().getSession().setAttribute(String.valueOf(user.getId()), SecurityUtils.getSubject().getPrincipals());
        SimpleAuthorizationInfo simpleAuthorizationInfo = new SimpleAuthorizationInfo();
        // grant roles
        for (Role role : roleList) {
            simpleAuthorizationInfo.addRole(role.getRolName());
        }
        // grant permissions
        for (Permission permission : permissionList) {
            simpleAuthorizationInfo.addStringPermission(permission.getPrmName());
        }
        return simpleAuthorizationInfo;
    }

    // authentication
    @Override
    protected AuthenticationInfo doGetAuthenticationInfo(AuthenticationToken authenticationToken) throws AuthenticationException {
        logger.info("doGetAuthenticationInfo " + authenticationToken.toString());
        UsernamePasswordToken token = (UsernamePasswordToken) authenticationToken;
        String userName = token.getUsername();
        logger.info(userName + " " + new String(token.getPassword()));
        User user = userServiceImpl.findByUserName(token.getUsername());
        if (user != null) {
            Session session = SecurityUtils.getSubject().getSession();
            session.setAttribute("user", user);
            return new SimpleAuthenticationInfo(userName, user.getUsrPassword(), getName());
        } else {
            return null;
        }
    }
}

At this point the simple Shiro authentication setup is in place; next comes verifying it.
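As an aside, once doGetAuthorizationInfo is wired up, roles and permissions can also be checked programmatically through the standard Shiro API rather than only through the filter chain. A small sketch (the role and permission strings below are illustrative, not the project's actual ones):

import org.apache.shiro.SecurityUtils;
import org.apache.shiro.subject.Subject;

public class AuthorizationChecks {

    public boolean canManageUsers() {
        Subject subject = SecurityUtils.getSubject();
        // hasRole / isPermitted trigger doGetAuthorizationInfo in the realm
        return subject.hasRole("admin") && subject.isPermitted("user:manage");
    }
}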
The controller:

package com.springboot.mybatis.demo.controller;

import com.springboot.mybatis.demo.common.utils.SelfStringUtils;
import com.springboot.mybatis.demo.controller.common.BaseController;
import com.springboot.mybatis.demo.model.User;
import org.apache.shiro.SecurityUtils;
import org.apache.shiro.authc.AuthenticationException;
import org.apache.shiro.authc.UsernamePasswordToken;
import org.apache.shiro.subject.Subject;
import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PostMapping;

@Controller
public class IndexController extends BaseController {

    @PostMapping("login")
    public String login(User user, Model model) {
        if (user == null || SelfStringUtils.isEmpty(user.getUsrName()) || SelfStringUtils.isEmpty(user.getUsrPassword())) {
            model.addAttribute("warn", "Please enter both the user name and the password");
            return "login";
        }
        Subject subject = SecurityUtils.getSubject();
        UsernamePasswordToken token = new UsernamePasswordToken(user.getUsrName(), user.getUsrPassword());
        token.setRememberMe(true);
        try {
            subject.login(token);
        } catch (AuthenticationException e) {
            model.addAttribute("error", "Wrong user name or password, please log in again");
            return "login";
        }
        return "index";
    }

    @GetMapping("login")
    public String index() {
        return "login";
    }
}

login.jsp:

<%--
  Created by IntelliJ IDEA.
  User: Administrator
  Date: 2018/7/29
  Time: 14:34
  To change this template use File | Settings | File Templates.
--%>
<%@ page contentType="text/html;charset=UTF-8" language="java" %>
<html>
<head>
    <title>Login</title>
</head>
<body>
<form action="login" method="POST">
    User Name: <input type="text" name="usrName"><br/>
    User Password: <input type="text" name="usrPassword"/>
    <input type="submit" value="Submit"/>
</form>
<span style="color: #b3b20a;">${warn}</span>
<span style="color: #b3130f;">${error}</span>
</body>
</html>

index.jsp:

<%--
  Created by IntelliJ IDEA.
  User: pc
  Date: 2018/7/23
  Time: 14:02
  To change this template use File | Settings | File Templates.
--%>
<%@ page contentType="text/html;charset=UTF-8" language="java" %>
<html>
<head>
    <title>Title</title>
</head>
<body>
<h1>Welcome to here!</h1>
</body>
</html>

Expected behaviour:
1. When not logged in, any request other than the login endpoint is redirected back to the login page.
2. A failed login returns "wrong user name or password".
3. If the user name or password is missing, "please enter both the user name and the password" is returned.
4. A successful login redirects to the index page; a user without the admin role cannot access the user/** paths, while everything else works normally.

7. Deploying the project with Docker
1) Basic deployment
• Write a Dockerfile:

FROM docker.io/williamyeh/java8
VOLUME /tmp
VOLUME /opt/workspace
#COPY /build/libs/spring-boot-mybatis-1.0-SNAPSHOT.war /opt/workspace/app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/app.jar"]

Declaring the working directory as a volume lets you mount it from the host; alternatively you can copy the jar straight into the container. Pick one of the two; I prefer the former.

• In the Dockerfile's directory, build the image with: docker build -t <image-name>:<tag> .
• Because the project uses MySQL, a MySQL container is also needed: docker run --name mysql -v /home/vagrant/docker-compose/spring-boot-compose/enjoy-dir/mysql/data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=root mysql:5.7
• Run the project image built above: docker run --name myproject -v /home/vagrant/workspace/:/opt/workspace --link mysql:mysql -p 8080:8080 -d <image-name> (the mounted directory /home/vagrant/workspace depends on your own setup)
• Hit port 8080 to test.

2) Single-host deployment managed with docker-compose (prerequisite: docker-compose is installed)
• Write a docker-compose.yml (besides MySQL, a Redis service is added here):

version: '3'
services:
  db:
    image: docker.io/mysql:5.7
    command: --default-authentication-plugin=mysql_native_password
    container_name: db
    volumes:
      - /home/vagrant/docker-compose/spring-boot-compose/enjoy-dir/mysql/data:/var/lib/mysql
      - /home/vagrant/docker-compose/spring-boot-compose/enjoy-dir/mysql/logs:/var/log/mysql
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_USER: test
      MYSQL_PASS: test
    restart: always
    networks:
      - default
  redis:
    image: docker.io/redis
    container_name: redis
    command: redis-server /usr/local/etc/redis/redis.conf
    volumes:
      - /home/vagrant/docker-compose/spring-boot-compose/enjoy-dir/redis/data:/data
      - /home/vagrant/docker-compose/spring-boot-compose/enjoy-dir/redis/redis.conf:/usr/local/etc/redis/redis.conf
    networks:
      - default
  spring-boot:
    build:
      context: ./enjoy-dir/workspace
      dockerfile: Dockerfile
    image: spring-boot:1.0-SNAPSHOT
    depends_on:
      - db
      - redis
    links:
      - db:mysql
      - redis:redis
    ports:
      - 8080:8080
    volumes:
      - /home/vagrant/docker-compose/spring-boot-compose/enjoy-dir/workspace:/opt/workspace
    networks:
      - default
networks:
  default:
    driver: bridge

Note that the mounted directories depend on your own environment. The Redis password can be set in redis.conf; for details see https://woodenrobot.me/2018/09/03/%E4%BD%BF%E7%94%A8-docker-compose-%E5%9C%A8-Docker-%E4%B8%AD%E5%90%AF%E5%8A%A8%E5%B8%A6%E5%AF%86%E7%A0%81%E7%9A%84-Redis/

• In the directory of docker-compose.yml run docker-compose up. A problem hit during this step: MySQL could not be connected to. The root user was not allowed to connect from outside, so I had to open up the root user inside MySQL; see https://www.cnblogs.com/goxcheer/p/8797377.html
• Hit port 8080 to test.
3) Multi-host distributed deployment with docker swarm
• Write the compose file (compose file format 3.0; for all options see the official reference):

version: '3'
services:
  db:
    image: docker.io/mysql:5.7
    command: --default-authentication-plugin=mysql_native_password   # password authentication mechanism
    volumes:
      - /home/vagrant/docker-compose/spring-boot-compose/enjoy-dir/mysql/data:/var/lib/mysql
      - /home/vagrant/docker-compose/spring-boot-compose/enjoy-dir/mysql/logs:/var/log/mysql
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_USER: test
      MYSQL_PASS: test
    restart: always              # restart with the machine
    networks:                    # attach the MySQL container to the mynet overlay network;
      mynet:                     # any container on that network can reach the database via the alias "mysql"
        aliases:
          - mysql
    ports:
      - 3306:3306
    deploy:                      # required when deploying with swarm
      replicas: 1                # how many replicas the stack starts by default
      restart_policy:            # restart policy
        condition: on-failure
      placement:                 # which nodes to deploy on
        constraints: [node.role == worker]
  redis:
    image: docker.io/redis
    command: redis-server /usr/local/etc/redis/redis.conf
    volumes:
      - /home/vagrant/docker-compose/spring-boot-compose/enjoy-dir/redis/data:/data
      - /home/vagrant/docker-compose/spring-boot-compose/enjoy-dir/redis/redis.conf:/usr/local/etc/redis/redis.conf
    networks:
      mynet:
        aliases:
          - redis
    ports:
      - 6379:6379
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints: [node.role == worker]
  spring-boot:
    build:
      context: ./enjoy-dir/workspace
      dockerfile: Dockerfile
    image: spring-boot:1.0-SNAPSHOT
    depends_on:
      - db
      - redis
    ports:
      - 8080:8080
    volumes:
      - /home/vagrant/docker-compose/spring-boot-compose/enjoy-dir/workspace:/opt/workspace
    networks:
      mynet:
        aliases:
          - spring-boot
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints: [node.role == worker]
networks:
  mynet:

• Once the compose file is ready, run: docker stack deploy -c [path to compose file] [stack name], as shown below.
After it completes you can list the services on a manager node with docker service ls, and inspect the state of each service as well.
• Swarm can also be managed visually with Portainer. Install Portainer on any one machine (installation details: http://www.pangxieke.com/linux/use-protainer-manage-docker.html). Once it is installed you can scale your containers (services) horizontally at will from the UI.

Summary: creating containers with plain docker commands works fine while there are only a few of them, but what about when there are many? Use docker-compose to manage containers in groups, which is a big efficiency gain: one command starts or stops a whole set of containers. On a single host docker-compose copes well enough, but what about multiple hosts? Use docker swarm. With docker swarm, no matter how many machines there are, you no longer deploy host by host: swarm automatically places containers on machines with enough resources, and efficient distributed deployment becomes so easy...

8. Read/write splitting
Read/write splitting is used to reduce the load on a single database and speed up access.
1) Configure the databases. Replace the original datasource configuration with the one below; only a master and a slave1 database are configured here.

#----------------------------------------- datasource: single database ----------------------------------------
#spring.datasource.url: jdbc:mysql://localhost:3306/liuzj?useUnicode=true&characterEncoding=gbk&zeroDateTimeBehavior=convertToNull
#spring.datasource.username=root
#spring.datasource.password=
#spring.datasource.driver-class-name=com.mysql.jdbc.Driver
#spring.datasource.type=com.alibaba.druid.pool.DruidDataSource
#----------------------------------------- datasource: single database ----------------------------------------

#----------------------------------------- datasource: read/write splitting ----------------------------------------
# master (writes)
spring.datasource.master.url=jdbc:mysql://192.168.10.16:3306/test
spring.datasource.master.username=root
spring.datasource.master.password=123456
spring.datasource.master.driver-class-name=com.mysql.jdbc.Driver
# slave1 (reads)
spring.datasource.slave1.url=jdbc:mysql://192.168.10.17:3306/test
spring.datasource.slave1.username=test
spring.datasource.slave1.password=123456
spring.datasource.slave1.driver-class-name=com.mysql.jdbc.Driver
#----------------------------------------- datasource: read/write splitting ----------------------------------------

2) Modify the dataSource initialization. Replace the original dataSource bean with the following:

// ----------------------------------- single datasource start ----------------------------------------
// @Bean
// @ConfigurationProperties(prefix = "spring.datasource")
// public DataSource dataSource() {
//     DruidDataSource druidDataSource = new DruidDataSource();
//     // maximum number of connections
//     druidDataSource.setMaxActive(Application.DEFAULT_DATASOURCE_MAX_ACTIVE);
//     // minimum number of idle connections
//     druidDataSource.setMinIdle(Application.DEFAULT_DATASOURCE_MIN_IDLE);
//     // maximum wait time when acquiring a connection
//     druidDataSource.setMaxWait(Application.DEFAULT_DATASOURCE_MAX_WAIT);
//     return druidDataSource;
// }
// ----------------------------------- single datasource end ----------------------------------------

// ----------------------------------- multiple datasources (read/write splitting) start ----------------------------------------
@Bean
@ConfigurationProperties("spring.datasource.master")
public DataSource masterDataSource() {
    return DataSourceBuilder.create().build();
}

@Bean
@ConfigurationProperties("spring.datasource.slave1")
public DataSource slave1DataSource() {
    return DataSourceBuilder.create().build();
}

@Bean
public DataSource myRoutingDataSource(@Qualifier("masterDataSource") DataSource masterDataSource,
                                      @Qualifier("slave1DataSource") DataSource slave1DataSource) {
    Map<Object, Object> targetDataSources = new HashMap<>(2);
    targetDataSources.put(DBTypeEnum.MASTER, masterDataSource);
    targetDataSources.put(DBTypeEnum.SLAVE1, slave1DataSource);
    MyRoutingDataSource myRoutingDataSource = new MyRoutingDataSource();
    myRoutingDataSource.setDefaultTargetDataSource(masterDataSource);
    myRoutingDataSource.setTargetDataSources(targetDataSources);
    return myRoutingDataSource;
}

@Resource
MyRoutingDataSource myRoutingDataSource;
// ----------------------------------- multiple datasources (read/write splitting) end ----------------------------------------

3) Use AOP to switch the datasource dynamically (MyCat could be used instead; look up its configuration yourself).

/**
 * @author admin
 * @date 2019-02-27
 */
@Aspect
@Component
public class DataSourceAspect {

    @Pointcut("!@annotation(com.springboot.mybatis.demo.config.annotation.Master) " +
            "&& (execution(* com.springboot.mybatis.demo.service..*.select*(..)) " +
            "|| execution(* com.springboot.mybatis.demo.service..*.get*(..)) " +
            "|| execution(* com.springboot.mybatis.demo.service..*.find*(..)))")
    public void readPointcut() {
    }

    @Pointcut("@annotation(com.springboot.mybatis.demo.config.annotation.Master) " +
            "|| execution(* com.springboot.mybatis.demo.service..*.insert*(..)) " +
            "|| execution(* com.springboot.mybatis.demo.service..*.add*(..)) " +
            "|| execution(* com.springboot.mybatis.demo.service..*.update*(..)) " +
            "|| execution(* com.springboot.mybatis.demo.service..*.edit*(..)) " +
            "|| execution(* com.springboot.mybatis.demo.service..*.delete*(..)) " +
            "|| execution(* com.springboot.mybatis.demo.service..*.remove*(..))")
    public void writePointcut() {
    }

    @Before("readPointcut()")
    public void read() {
        DBContextHolder.slave();
    }

    @Before("writePointcut()")
    public void write() {
        DBContextHolder.master();
    }

    /**
     * An alternative: an if...else that decides which methods read from the slave and routes everything else to the master.
     */
    // @Before("execution(* com.springboot.mybatis.demo.service.impl.*.*(..))")
    // public void before(JoinPoint jp) {
    //     String methodName = jp.getSignature().getName();
    //
    //     if (StringUtils.startsWithAny(methodName, "get", "select", "find")) {
    //         DBContextHolder.slave();
    //     } else {
    //         DBContextHolder.master();
    //     }
    // }
}

4) The above shows only the main configuration and steps; classes such as DBContextHolder are not listed here, see GitHub for the details.
Summary / reference: https://www.cnblogs.com/cjsblog/p/9712457.html
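Since those helper classes are omitted, here is a minimal sketch of what DBTypeEnum, DBContextHolder, MyRoutingDataSource and the @Master annotation referenced above could look like. It is an assumption based on the usual AbstractRoutingDataSource pattern, not the repository's actual code (each type would live in its own file):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

enum DBTypeEnum {
    MASTER, SLAVE1
}

// Holds the datasource choice for the current thread.
class DBContextHolder {
    private static final ThreadLocal<DBTypeEnum> CONTEXT = ThreadLocal.withInitial(() -> DBTypeEnum.MASTER);

    static void master() { CONTEXT.set(DBTypeEnum.MASTER); }
    static void slave() { CONTEXT.set(DBTypeEnum.SLAVE1); }
    static DBTypeEnum get() { return CONTEXT.get(); }
    static void clear() { CONTEXT.remove(); }
}

// Routes each connection request to the datasource chosen for the current thread.
class MyRoutingDataSource extends org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource {
    @Override
    protected Object determineCurrentLookupKey() {
        return DBContextHolder.get();
    }
}

// Marks a read method that must still run against the master (e.g. read-your-own-writes).
@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
@interface Master {
}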
9. Integrating Quartz for distributed scheduled tasks
• A quick comparison of the classic schedulers: Spring's own @Scheduled scheduler runs inside a single application instance and does not support a distributed environment. To support distribution it needs a task-coordination plugin such as spring-scheduling-cluster, which works by locking tasks and therefore requires middleware that can provide a distributed lock.

1) Initialize the database schema (the script can also be downloaded from the official site):

drop table if exists qrtz_fired_triggers;
drop table if exists qrtz_paused_trigger_grps;
drop table if exists qrtz_scheduler_state;
drop table if exists qrtz_locks;
drop table if exists qrtz_simple_triggers;
drop table if exists qrtz_simprop_triggers;
drop table if exists qrtz_cron_triggers;
drop table if exists qrtz_blob_triggers;
drop table if exists qrtz_triggers;
drop table if exists qrtz_job_details;
drop table if exists qrtz_calendars;create table qrtz_job_details(sched_name varchar(120) not null,job_name varchar(120) not null,job_group varchar(120) not null,description varchar(250) null,job_class_name varchar(250) not null,is_durable varchar(1) not null,is_nonconcurrent varchar(1) not null,is_update_data varchar(1) not null,requests_recovery varchar(1) not null,job_data blob null,primary key (sched_name,job_name,job_group)
);create table qrtz_triggers(sched_name varchar(120) not null,trigger_name varchar(120) not null,trigger_group varchar(120) not null,job_name varchar(120) not null,job_group varchar(120) not null,description varchar(250) null,next_fire_time bigint(13) null,prev_fire_time bigint(13) null,priority integer null,trigger_state varchar(16) not null,trigger_type varchar(8) not null,start_time bigint(13) not null,end_time bigint(13) null,calendar_name varchar(200) null,misfire_instr smallint(2) null,job_data blob null,primary key (sched_name,trigger_name,trigger_group),foreign key (sched_name,job_name,job_group)references qrtz_job_details(sched_name,job_name,job_group)
);create table qrtz_simple_triggers(sched_name varchar(120) not null,trigger_name varchar(120) not null,trigger_group varchar(120) not null,repeat_count bigint(7) not null,repeat_interval bigint(12) not null,times_triggered bigint(10) not null,primary key (sched_name,trigger_name,trigger_group),foreign key (sched_name,trigger_name,trigger_group)references qrtz_triggers(sched_name,trigger_name,trigger_group)
);create table qrtz_cron_triggers(sched_name varchar(120) not null,trigger_name varchar(120) not null,trigger_group varchar(120) not null,cron_expression varchar(200) not null,time_zone_id varchar(80),primary key (sched_name,trigger_name,trigger_group),foreign key (sched_name,trigger_name,trigger_group)references qrtz_triggers(sched_name,trigger_name,trigger_group)
);create table qrtz_simprop_triggers(sched_name varchar(120) not null,trigger_name varchar(120) not null,trigger_group varchar(120) not null,str_prop_1 varchar(512) null,str_prop_2 varchar(512) null,str_prop_3 varchar(512) null,int_prop_1 int null,int_prop_2 int null,long_prop_1 bigint null,long_prop_2 bigint null,dec_prop_1 numeric(13,4) null,dec_prop_2 numeric(13,4) null,bool_prop_1 varchar(1) null,bool_prop_2 varchar(1) null,primary key (sched_name,trigger_name,trigger_group),foreign key (sched_name,trigger_name,trigger_group)references qrtz_triggers(sched_name,trigger_name,trigger_group)
);create table qrtz_blob_triggers(sched_name varchar(120) not null,trigger_name varchar(120) not null,trigger_group varchar(120) not null,blob_data blob null,primary key (sched_name,trigger_name,trigger_group),foreign key (sched_name,trigger_name,trigger_group)references qrtz_triggers(sched_name,trigger_name,trigger_group)
);create table qrtz_calendars(sched_name varchar(120) not null,calendar_name varchar(120) not null,calendar blob not null,primary key (sched_name,calendar_name)
);create table qrtz_paused_trigger_grps(sched_name varchar(120) not null,trigger_group varchar(120) not null,primary key (sched_name,trigger_group)
);create table qrtz_fired_triggers(sched_name varchar(120) not null,entry_id varchar(95) not null,trigger_name varchar(120) not null,trigger_group varchar(120) not null,instance_name varchar(200) not null,fired_time bigint(13) not null,sched_time bigint(13) not null,priority integer not null,state varchar(16) not null,job_name varchar(200) null,job_group varchar(200) null,is_nonconcurrent varchar(1) null,requests_recovery varchar(1) null,primary key (sched_name,entry_id)
);create table qrtz_scheduler_state(sched_name varchar(120) not null,instance_name varchar(120) not null,last_checkin_time bigint(13) not null,checkin_interval bigint(13) not null,primary key (sched_name,instance_name)
);create table qrtz_locks(sched_name varchar(120) not null,lock_name varchar(40) not null,primary key (sched_name,lock_name)
);

2) Create and configure the Quartz properties file:

# --------------------------------------- quartz ---------------------------------------
# the main parts are scheduler, threadPool, jobStore, plugin, etc.
org.quartz.scheduler.instanceName=DefaultQuartzScheduler
org.quartz.scheduler.rmi.export=false
org.quartz.scheduler.rmi.proxy=false
org.quartz.scheduler.wrapJobExecutionInUserTransaction=false
# the thread class used to instantiate the ThreadPool is SimpleThreadPool
org.quartz.threadPool.class=org.quartz.simpl.SimpleThreadPool
# threadCount and threadPriority are injected into the ThreadPool instance via setters
# number of concurrent threads
org.quartz.threadPool.threadCount=5
# thread priority
org.quartz.threadPool.threadPriority=5
org.quartz.threadPool.threadsInheritContextClassLoaderOfInitializingThread=true
org.quartz.jobStore.misfireThreshold=5000
# by default jobs are stored in memory
#org.quartz.jobStore.class=org.quartz.simpl.RAMJobStore
# persistent job store
org.quartz.jobStore.class=org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.tablePrefix=QRTZ_
org.quartz.jobStore.dataSource=qzDS
org.quartz.dataSource.qzDS.driver=com.mysql.jdbc.Driver
org.quartz.dataSource.qzDS.URL=jdbc:mysql://192.168.10.16:3306/test?useUnicode=true&characterEncoding=UTF-8
org.quartz.dataSource.qzDS.user=root
org.quartz.dataSource.qzDS.password=123456
org.quartz.dataSource.qzDS.maxConnections=10
# --------------------------------------- quartz -----------------------------------------

3) Initialize the Quartz beans:
@Configuration
public class QuartzConfig {

    /**
     * Instantiate the SchedulerFactoryBean.
     *
     * @return SchedulerFactoryBean
     * @throws IOException on error
     */
    @Bean(name = "schedulerFactory")
    public SchedulerFactoryBean schedulerFactoryBean() throws IOException {
        SchedulerFactoryBean factoryBean = new SchedulerFactoryBean();
        factoryBean.setQuartzProperties(quartzProperties());
        return factoryBean;
    }

    /**
     * Load the configuration file.
     *
     * @return Properties
     * @throws IOException on error
     */
    @Bean
    public Properties quartzProperties() throws IOException {
        PropertiesFactoryBean propertiesFactoryBean = new PropertiesFactoryBean();
        propertiesFactoryBean.setLocation(new ClassPathResource("/quartz.properties"));
        // initialize the object only after the values from quartz.properties have been read and injected
        propertiesFactoryBean.afterPropertiesSet();
        return propertiesFactoryBean.getObject();
    }

    /**
     * Quartz initialization listener.
     *
     * @return QuartzInitializerListener
     */
    @Bean
    public QuartzInitializerListener executorListener() {
        return new QuartzInitializerListener();
    }

    /**
     * Obtain the Scheduler instance from the SchedulerFactoryBean.
     *
     * @return Scheduler
     * @throws IOException on error
     */
    @Bean(name = "Scheduler")
    public Scheduler scheduler() throws IOException {
        return schedulerFactoryBean().getScheduler();
    }
}

4) Create a Quartz service that performs basic operations on jobs so they can be scheduled dynamically:

/**
 * @author admin
 * @date 2019-02-28
 */
public interface QuartzJobService {

    /**
     * Add a job.
     *
     * @param scheduler      the Scheduler instance
     * @param jobClassName   job class name
     * @param jobGroupName   job group name
     * @param cronExpression cron expression
     * @throws Exception on error
     */
    void addJob(Scheduler scheduler, String jobClassName, String jobGroupName, String cronExpression) throws Exception;

    /**
     * Pause a job.
     *
     * @param scheduler    the Scheduler instance
     * @param jobClassName job class name
     * @param jobGroupName job group name
     * @throws Exception on error
     */
    void pauseJob(Scheduler scheduler, String jobClassName, String jobGroupName) throws Exception;

    /**
     * Resume a job.
     *
     * @param scheduler    the Scheduler instance
     * @param jobClassName job class name
     * @param jobGroupName job group name
     * @throws Exception on error
     */
    void resumeJob(Scheduler scheduler, String jobClassName, String jobGroupName) throws Exception;

    /**
     * Reschedule a job.
     *
     * @param scheduler      the Scheduler instance
     * @param jobClassName   job class name
     * @param jobGroupName   job group name
     * @param cronExpression cron expression
     * @throws Exception on error
     */
    void rescheduleJob(Scheduler scheduler, String jobClassName, String jobGroupName, String cronExpression) throws Exception;

    /**
     * Delete a job.
     *
     * @param jobClassName job class name
     * @param jobGroupName job group name
     * @throws Exception on error
     */
    void deleteJob(Scheduler scheduler, String jobClassName, String jobGroupName) throws Exception;

    /**
     * Fetch all jobs; pagination is done on the front end.
     *
     * @return List
     */
    List<QuartzJob> findList();
}

/**
 * @author admin
 * @date 2019-02-28
 * @see QuartzJobService
 */
@Service
public class QuartzJobServiceImpl implements QuartzJobService {

    @Autowired
    private QuartzJobMapper quartzJobMapper;

    @Override
    public void addJob(Scheduler scheduler, String jobClassName, String jobGroupName, String cronExpression) throws Exception {
        jobClassName = "com.springboot.mybatis.demo.job." + jobClassName;
        // start the scheduler
        scheduler.start();
        // build the job details
        JobDetail jobDetail = JobBuilder.newJob(QuartzJobUtils.getClass(jobClassName).getClass())
                .withIdentity(jobClassName, jobGroupName).build();
        // cron schedule builder (i.e. when the job runs)
        CronScheduleBuilder builder = CronScheduleBuilder.cronSchedule(cronExpression);
        // build a new trigger from the cron expression
        CronTrigger trigger = TriggerBuilder.newTrigger().withIdentity(jobClassName, jobGroupName)
                .withSchedule(builder).build();
        // register the job and trigger with the scheduler
        scheduler.scheduleJob(jobDetail, trigger);
    }

    @Override
    public void pauseJob(Scheduler scheduler, String jobClassName, String jobGroupName) throws Exception {
        jobClassName = "com.springboot.mybatis.demo.job." + jobClassName;
        scheduler.pauseJob(JobKey.jobKey(jobClassName, jobGroupName));
    }

    @Override
    public void resumeJob(Scheduler scheduler, String jobClassName, String jobGroupName) throws Exception {
        jobClassName = "com.springboot.mybatis.demo.job." + jobClassName;
        scheduler.resumeJob(JobKey.jobKey(jobClassName, jobGroupName));
    }

    @Override
    public void rescheduleJob(Scheduler scheduler, String jobClassName, String jobGroupName, String cronExpression) throws Exception {
        jobClassName = "com.springboot.mybatis.demo.job." + jobClassName;
        TriggerKey triggerKey = TriggerKey.triggerKey(jobClassName, jobGroupName);
        CronScheduleBuilder builder = CronScheduleBuilder.cronSchedule(cronExpression);
        CronTrigger trigger = (CronTrigger) scheduler.getTrigger(triggerKey);
        // rebuild the trigger with the new cron expression
        trigger = trigger.getTriggerBuilder().withIdentity(jobClassName, jobGroupName).withSchedule(builder).build();
        // re-register the job with the new trigger
        scheduler.rescheduleJob(triggerKey, trigger);
    }

    @Override
    public void deleteJob(Scheduler scheduler, String jobClassName, String jobGroupName) throws Exception {
        jobClassName = "com.springboot.mybatis.demo.job." + jobClassName;
        scheduler.pauseTrigger(TriggerKey.triggerKey(jobClassName, jobGroupName));
        scheduler.unscheduleJob(TriggerKey.triggerKey(jobClassName, jobGroupName));
        scheduler.deleteJob(JobKey.jobKey(jobClassName, jobGroupName));
    }

    @Override
    public List<QuartzJob> findList() {
        return quartzJobMapper.findList();
    }
}

5) Create the job:
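HelloJob below implements a BaseJob interface that is not shown in the post; presumably it simply extends org.quartz.Job. A minimal sketch under that assumption:

import org.quartz.Job;

// Hypothetical marker interface; the real definition lives in the GitHub repo.
public interface BaseJob extends Job {
}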
/**
 * @author admin
 * @date 2019-02-28
 * @see BaseJob
 */
public class HelloJob implements BaseJob {

    private final Logger logger = LoggerFactory.getLogger(getClass());

    @Override
    public void execute(JobExecutionContext jobExecutionContext) throws JobExecutionException {
        logger.info("hello, I'm quartz job - HelloJob");
    }
}

6) The job can now be tested: adding, pausing, rescheduling and so on.

Summary:
• Only the main integration steps are shown above; see GitHub for the details.
• In a distributed setup Quartz spreads the tasks across different machines. You can package the project as a jar and start it in two terminals to simulate a distributed deployment; watching the job execution you will see HelloJob alternate between the two instances.
• The integration above drew on https://zhuanlan.zhihu.com/p/38546754
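To round off this section, here is a sketch of how the service above might be driven, for example from a REST endpoint. The controller and its parameters are assumptions for illustration, not part of the post:

import org.quartz.Scheduler;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical endpoint that schedules a job class from the com.springboot.mybatis.demo.job package.
@RestController
public class QuartzJobController {

    @Autowired
    @Qualifier("Scheduler")
    private Scheduler scheduler;

    @Autowired
    private QuartzJobService quartzJobService;

    @PostMapping("/job/add")
    public String addJob(@RequestParam String jobClassName,
                         @RequestParam String jobGroupName,
                         @RequestParam String cronExpression) throws Exception {
        // e.g. jobClassName = "HelloJob", cronExpression = "0/10 * * * * ?" (every 10 seconds)
        quartzJobService.addJob(scheduler, jobClassName, jobGroupName, cronExpression);
        return "ok";
    }
}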
10. Automatic table sharding
1) Overview
Generally a table is split according to the field that is queried most often. But many features still need global queries, and in those cases a global query cannot be avoided.
For the data that frequently needs global queries, keep a separate redundant table and do not shard that part.
Infrequent global queries can only be done with a UNION across the shards, and such queries usually respond very slowly, so some functional restrictions are needed (for example limiting how often they can be issued) to keep the database from hanging for long stretches. Alternatively, sync the data to a read-only replica and search there, so the primary is not affected.

2) Preparation for sharding
• Sharding should be configurable: whether it is enabled, which tables are sharded, and the sharding strategy.
• How to create the shard tables dynamically.
3) In practice
• First, define your own configuration class:

import com.beust.jcommander.internal.Lists;
import com.springboot.mybatis.demo.common.constant.Constant;
import com.springboot.mybatis.demo.common.utils.SelfStringUtils;

import java.util.Arrays;
import java.util.List;
import java.util.Map;

/**
 * Holds the datasource configuration.
 *
 * @author lzj
 * @date 2019-04-09
 */
public class DatasourceConfig {

    private Master master;
    private Slave1 slave1;
    private SubTable subTable;

    public SubTable getSubTable() { return subTable; }
    public void setSubTable(SubTable subTable) { this.subTable = subTable; }
    public Master getMaster() { return master; }
    public void setMaster(Master master) { this.master = master; }
    public Slave1 getSlave1() { return slave1; }
    public void setSlave1(Slave1 slave1) { this.slave1 = slave1; }

    public static class Master {
        private String jdbcUrl;
        private String username;
        private String password;
        private String driverClassName;

        public String getJdbcUrl() { return jdbcUrl; }
        public void setJdbcUrl(String jdbcUrl) { this.jdbcUrl = jdbcUrl; }
        public String getUsername() { return username; }
        public void setUsername(String username) { this.username = username; }
        public String getPassword() { return password; }
        public void setPassword(String password) { this.password = password; }
        public String getDriverClassName() { return driverClassName; }
        public void setDriverClassName(String driverClassName) { this.driverClassName = driverClassName; }
    }

    public static class Slave1 {
        private String jdbcUrl;
        private String username;
        private String password;
        private String driverClassName;

        public String getJdbcUrl() { return jdbcUrl; }
        public void setJdbcUrl(String jdbcUrl) { this.jdbcUrl = jdbcUrl; }
        public String getUsername() { return username; }
        public void setUsername(String username) { this.username = username; }
        public String getPassword() { return password; }
        public void setPassword(String password) { this.password = password; }
        public String getDriverClassName() { return driverClassName; }
        public void setDriverClassName(String driverClassName) { this.driverClassName = driverClassName; }
    }

    public static class SubTable {
        private boolean enable;
        private String schemaRoot;
        private String schemas;
        private String strategy;

        public String getStrategy() { return strategy; }
        public void setStrategy(String strategy) { this.strategy = strategy; }
        public boolean isEnable() { return enable; }
        public void setEnable(boolean enable) { this.enable = enable; }
        public String getSchemaRoot() { return schemaRoot; }
        public void setSchemaRoot(String schemaRoot) { this.schemaRoot = schemaRoot; }

        public List<String> getSchemas() {
            if (SelfStringUtils.isNotEmpty(this.schemas)) {
                return Arrays.asList(this.schemas.split(Constant.Symbol.COMMA));
            }
            return Lists.newArrayList();
        }

        public void setSchemas(String schemas) { this.schemas = schemas; }
    }
}

Because the project is configured with multiple datasources, the configuration is split into a master and a slave datasource section, plus the sharding settings:

#------------------- automatic sharding config -----------------
spring.datasource.sub-table.enable=true
spring.datasource.sub-table.schema-root=classpath*:sub/
spring.datasource.sub-table.schemas=smg_user
spring.datasource.sub-table.strategy=each_day
#------------------- automatic sharding config -----------------

The settings above go into application.properties. Then hand our DatasourceConfig class over to the IoC container:

@Bean
@ConfigurationProperties(prefix = "spring.datasource")
public DatasourceConfig datasourceConfig() {
    return new DatasourceConfig();
}

This way the related settings can be read through our own configuration class.

• Then use AOP to cut in at the mapper method layer: every time a mapper method is called, check whether the entity behind the SQL needs sharding.

@Aspect
@Component
public class BaseMapperAspect {

    private final static Logger logger = LoggerFactory.getLogger(BaseMapperAspect.class);

    // @Autowired
    // DataSourceProperties dataSourceProperties;

    // @Autowired
    // private DataSource dataSource;

    @Autowired
    private DatasourceConfig datasourceConfig;
    @Autowired
    SubTableUtilsFactory subTableUtilsFactory;
    @Autowired
    private DBService dbService;
    @Resource
    MyRoutingDataSource myRoutingDataSource;

    @Pointcut("execution(* com.springboot.mybatis.demo.mapper.common.BaseMapper.*(..))")
    public void getMybatisTableEntity() {
    }

    /**
     * Obtain the runtime class.
     *
     * @param joinPoint target
     * @throws ClassNotFoundException on error
     */
    @Before("getMybatisTableEntity()")
    public void setThreadLocalMap(JoinPoint joinPoint) throws ClassNotFoundException {
        // ...
        // automatic sharding
        MybatisTable mybatisTable = MybatisTableUtils.getMybatisTable(Class.forName(actualTypeArguments[0].getTypeName()));
        Assert.isTrue(mybatisTable != null, "Null of the MybatisTable");
        String oldTableName = mybatisTable.getName();
        if (datasourceConfig.getSubTable().isEnable() && datasourceConfig.getSubTable().getSchemas().contains(oldTableName)) {
            ThreadLocalUtils.setSubTableName(subTableUtilsFactory.getSubTableUtil(datasourceConfig.getSubTable().getStrategy()).getTableName(oldTableName));
            // decide whether the shard table has to be created
            dbService.autoSubTable(ThreadLocalUtils.getSubTableName(), oldTableName, datasourceConfig.getSubTable().getSchemaRoot());
        } else {
            ThreadLocalUtils.setSubTableName(oldTableName);
        }
    }
}

If sharding is needed, the table name is derived from the configured strategy, and the database is then checked for that table: if it does not exist yet it is created automatically, otherwise this step is skipped.
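The aspect above leans on ThreadLocalUtils and on a strategy object resolved for the configured each_day strategy, neither of which is shown in the post. A rough sketch of how they could be shaped, as an assumption for illustration (the real classes are in the GitHub repo):

import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

// Hypothetical thread-local holder for the resolved shard-table name.
class ThreadLocalUtils {
    private static final ThreadLocal<String> SUB_TABLE_NAME = new ThreadLocal<>();

    static void setSubTableName(String name) { SUB_TABLE_NAME.set(name); }
    static String getSubTableName() { return SUB_TABLE_NAME.get(); }
    static void clear() { SUB_TABLE_NAME.remove(); }
}

// Hypothetical "each_day" strategy: one shard table per calendar day.
class EachDaySubTableUtil {
    String getTableName(String baseTableName) {
        // e.g. smg_user -> smg_user_2019_04_15
        return baseTableName + "_" + LocalDate.now().format(DateTimeFormatter.ofPattern("yyyy_MM_dd"));
    }
}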
• After the shard table exists, the SQL itself is intercepted and rewritten. A MyBatis interceptor captures each statement and, if the entity behind the SQL is sharded, swaps the table name so the statement targets the right shard table:

/**
 * Dynamically retargets the table.
 *
 * @author liuzj
 * @date 2019-04-15
 */
@Intercepts({@Signature(type = StatementHandler.class, method = "prepare", args = {Connection.class, Integer.class})})
public class SubTableSqlHandler implements Interceptor {

    Logger logger = LoggerFactory.getLogger(SubTableSqlHandler.class);

    @Override
    public Object intercept(Invocation invocation) throws Throwable {
        StatementHandler handler = (StatementHandler) invocation.getTarget();
        BoundSql boundSql = handler.getBoundSql();
        String sql = boundSql.getSql();
        // rewrite the sql
        if (SelfStringUtils.isNotEmpty(sql)) {
            MybatisTable mybatisTable = MybatisTableUtils.getMybatisTable(ThreadLocalUtils.get());
            Assert.isTrue(mybatisTable != null, "Null of the MybatisTable");
            Field sqlField = boundSql.getClass().getDeclaredField("sql");
            sqlField.setAccessible(true);
            sqlField.set(boundSql, sql.replaceAll(mybatisTable.getName(), ThreadLocalUtils.getSubTableName()));
        }
        return invocation.proceed();
    }

    @Override
    public Object plugin(Object target) {
        return Plugin.wrap(target, this);
    }

    @Override
    public void setProperties(Properties properties) {
    }
}

That is the basic idea behind dynamic table sharding in this project; see GitHub for the full code.

To be continued... If anything here is off, suggestions and comments are welcome. Thanks!

Reprinted from: https://www.cnblogs.com/lzj123/p/9277021.html